Testing ECMP on Linux

I was reading an article about ECMP (Equal Cost Multipath) for traffic load sharing, and it brought back memories of my previous traffic engineering tests. It seems simple at first glance, but it’s actually more complex—especially when it comes to policy-based routing.

The challenge lies in how traffic is redirected and shared on a per-session basis, with or without NAT, across multiple links or circuits with different latencies. There’s also the complication of firewall interception with asymmetric return traffic. These factors make ideal traffic load sharing quite difficult to achieve.

Of course, if tunneling is involved, things get simpler. The tunnel hides the underlying paths from both endpoints, so you can simply add two equal-metric routes in the overlay routing. Even so, that doesn’t really explain why load-sharing performance behaves the way it does.

What about service enhancement? If the primary link becomes congested, should the secondary link pick up some of the traffic? That’s not exactly round-robin behavior—it would require active measurement and monitoring of the links. Maintaining session flow on the primary link while redirecting new flows to the secondary link sounds ideal, but it’s difficult to implement. For MPLS-TE, that’s straightforward—but what if you have two internet links, like one DIA (Direct Internet Access) and one mobile network? How would you handle that?

Well, this is just for fun, and I haven’t done any serious measurements yet. But after setting up load sharing on my node, it seems to be working, though I haven’t really thought through the next steps. Running a Speedtest shows that the flows (split by port) are transmitted separately. Hmm… not ideal, but not bad either. But what about other applications? If they end up using two different IP addresses for outgoing traffic… ahhhh… (the little sketch after the commands below checks exactly that).

Let’s discuss this, bro.


Enable multipath load sharing over two next hops
sudo ip route add default scope global \
nexthop via 192.168.X.X dev XXX weight 1 \
nexthop via 192.168.X.X dev XXX weight 1

For multipath routing, adjusting TCP connection tracking can help; nf_conntrack_tcp_loose=0 makes conntrack track only connections it has seen from the SYN, instead of picking up flows mid-stream
sudo sysctl -w net.netfilter.nf_conntrack_tcp_loose=0

Enable Layer 4 hashing (balance flows on the full 5-tuple, including ports)
sudo sysctl -w net.ipv4.fib_multipath_hash_policy=1

Enable IP Forwarding
sudo sysctl -w net.ipv4.ip_forward=1

Allow asymmetric return traffic:
Set rp_filter to 0 (disable reverse path filtering) so the kernel won’t drop asymmetric traffic

sudo sysctl -w net.ipv4.conf.all.rp_filter=0

Flush the route cache
sudo ip route flush cache
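
A quick way to eyeball how flows are spread is to open a few connections and see which source address the kernel picks for each one. A minimal Python sketch, assuming each uplink carries its own address (the target host is just a placeholder):

import socket

# Open several TCP flows to the same destination; each one gets a new
# source port, so L4 hashing may send it via a different next hop, and
# the kernel's source-address selection follows the chosen route.
TARGET = ("example.com", 80)  # placeholder destination

for i in range(5):
    s = socket.create_connection(TARGET, timeout=5)
    local_ip, local_port = s.getsockname()
    print(f"flow {i}: leaves from {local_ip}:{local_port}")
    s.close()

If the two uplink addresses alternate in the output, per-flow hashing is doing its job.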

#ECMP #Linux #Internet #Routing #IP #Firewall #Tunneling #MPLS #trafficEngineering #ChatGPT

Looking glass function provided by RIPE Atlas?

I performed some traceroute tests using the public looking glass of another organization/provider. I found that some test functions, like Ping and Traceroute, were launched using RIPE Atlas probes. It looks impressive and kind of funny.

Previously, the provider had developed a web interface and API to launch commands from their own PE (Provider Edge) or Internet BG (Border Gateway) routers and return the results. A geographical router list lets users run region-based tests.

This seems to be a new method using RIPE Atlas, where queries can be made via an API. The web interface lets users select which probe to use for the measurement, deducting the web provider’s “RIPE Atlas Credits” for each test.
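
For reference, firing off such a one-off measurement through the RIPE Atlas v2 API looks roughly like this. This is a sketch only, based on the public measurements endpoint; the key, target, and probe selection are placeholders, so check the payload fields against the official docs:

import requests

# Create a one-off ping measurement from one probe in a chosen country.
payload = {
    "definitions": [
        {"type": "ping", "af": 4, "target": "www.example.com",
         "description": "looking-glass style ping"}
    ],
    "probes": [{"requested": 1, "type": "country", "value": "NL"}],
    "is_oneoff": True,
}
resp = requests.post(
    "https://atlas.ripe.net/api/v2/measurements/",
    json=payload,
    headers={"Authorization": "Key YOUR_ATLAS_KEY"},  # placeholder key
)
print(resp.status_code, resp.json())

Each call like this is what burns the portal operator’s RIPE Atlas credits.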

However, I’m wondering: since a looking glass is meant to provide insight into a specific network provider’s or AS owner’s network, if we’re going through RIPE Atlas anyway, why not just go to the official RIPE Atlas website to launch the test?

Well, I guess the more user-friendly web portal makes it easier for users.

Pingnetbox – http://www.pingnetbox.com

#ripe #atlas #lookingglass #measurement #ping #traceroute #test #internet #AS #chatgpt #proofreading

Model Training on AMD 16-core CPU with 8GB RAM running in a virtual machine for Bitcoin Price Prediction – Part 2 – Updated

Continuing with over 500,000 data points for Bitcoin (BTC) price prediction

Using the Python program, the first method I tried was SVR (Support Vector Regression) for prediction. However… how many steps should I use for prediction? 🤔
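
To make “steps” concrete: each training sample is a sliding window of past prices, and the step count is the window length. A minimal sketch of the idea, assuming prices is a plain 1-D array of per-minute BTC prices (the variable and file names are mine, not from the actual program):

import numpy as np
from sklearn.svm import SVR

def make_windows(prices, steps):
    # Turn the price series into (window of `steps` prices -> next price) pairs.
    X, y = [], []
    for i in range(len(prices) - steps):
        X.append(prices[i:i + steps])
        y.append(prices[i + steps])
    return np.array(X), np.array(y)

prices = np.loadtxt("btc_prices.txt")   # placeholder: one price per line
X, y = make_windows(prices, steps=60)   # steps = how much history each sample sees
model = SVR(kernel="rbf")
model.fit(X, y)  # kernel SVR scales badly with sample count, hence the hours

Longer windows mean wider samples and a harder fit, which is part of why 120 steps took so much longer than 60.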

Previously, I used a Raspberry Pi 4B (4GB RAM) for prediction, and… OH… 😩
I don’t even want to count the time again. Just imagine training a new model on a Raspberry Pi!

So, I switched to an AMD 16-core CPU with 8GB RAM running in a virtual machine to perform the prediction.

  • 60-step calculation: took 7 hours 😵
  • 120 steps: …Man… still running after 20 hours! 😫 It finally finished at 33 hours!!!

Do I need an M4 machine for this? 💻⚡

ChatGPT provided another approach.
OK, let’s test it… I’ll let you know how it goes! 🚀

🧪 Quick Example: the Effect of More Time Steps

Time Step (X Length) | Predicted Accuracy | Notes
30                   | ⭐⭐⭐            | Quick but less accurate for long-term trends.
60                   | ⭐⭐⭐⭐          | Balanced context and performance.
120                  | ⭐⭐⭐⭐½         | Better for long-term trends but slower.
240                  | ⭐⭐              | Risk of overfitting and slower training.

#SVR #Prediction #Computing #AI #Step #ChatGPT #Python #Bitcoin #crypto #Cryptocurrency #trading #price #virtualmachine #vm #raspberrypi #ram #CPU #CUDA #AMD #Nvidia

Model Training Using TensorFlow on Raspberry Pi 4B (4GB RAM) for Bitcoin Price Prediction

The development of a CRYPTO gaming system https://www.cryptogeemu.com/ has been ongoing for around two years. What does it actually do? Well… just for fun!

The system captures data from several major crypto market sites to fetch the latest price list every minute. It then calculates the average values to determine the price. Users can create a new account and are given a default balance of $10,000 USD to buy and sell crypto—but there’s no actual real-market trading.
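
In spirit, the pricing step is just an average over the latest quotes. A toy sketch only; the fetch function and numbers here are placeholders, not the real system:

import statistics

def fetch_latest_prices():
    # Placeholder: in the real system this polls several market sites every minute.
    return [95567.2, 95582.9, 95571.4]

price = statistics.mean(fetch_latest_prices())
print(f"current BTC price: {price:.2f}")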

The Thought Process

Suddenly, I started wondering:
How can I use this kind of historical data? Can I make a prediction?

So, I simply asked ChatGPT about my idea. I shared the data structure and inquired about how to perform predictions.

ChatGPT first suggested using Linear Regression. However, the predicted values deviated a lot from the next actual data point.

Next, it introduced me to the Long Short-Term Memory (LSTM) method, trained with the TensorFlow library.
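
The general shape of such an LSTM model in TensorFlow/Keras is something like this. This is a sketch of the approach, not the exact program; the window length and layer sizes are illustrative:

import numpy as np
from tensorflow import keras

STEPS = 60  # illustrative window length: prices per training sample

# X should be shaped (samples, STEPS, 1) with scaled prices, y the next price.
model = keras.Sequential([
    keras.Input(shape=(STEPS, 1)),
    keras.layers.LSTM(50),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
# model.fit(X, y, epochs=10, batch_size=64)  # this is the part that takes hours on a Pi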

I fed 514,709 lines of BTC price data into the training program on a Raspberry Pi 4B (4GB RAM).
The first run took 7 hours to complete the model !!!!!!!!!!!!!!!!!

But the result… um… 😐

I’m currently running the second round of training. I’ll update you all soon!

Sample Data:

YYYY/MM/DD-hh:mm:ss  Price  
2025/02/17-20:06:09 95567.20707189501
2025/02/17-20:07:07 95582.896334665
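
Parsing that format is one line per row. A quick sketch, assuming the file is whitespace-separated exactly as shown (the filename is a placeholder):

from datetime import datetime

rows = []
with open("btc_prices.log") as f:  # placeholder filename
    for line in f:
        ts, price = line.split()
        # "2025/02/17-20:06:09" -> datetime, price string -> float
        rows.append((datetime.strptime(ts, "%Y/%m/%d-%H:%M:%S"), float(price)))
print(rows[:2])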

P.S.: I’m not great at math. 😅

#BTC #Bitcoin #TensorFlow #AI #CryptoGeemu #RaspberryPi #Training #Crypto #ChatGPT #LinearRegression #LSTM #LongShortTermMemory

Deepseek 1.5b vs 8b version

Well, we all expect that the 1.5b and 8b versions will differ in knowledge.

We ran a test:
1. 1.5b on a Raspberry Pi 4B with 4GB RAM.
2. 8b on a virtual machine with an AMD Radeon GPU and 16GB RAM on Ubuntu.

We asked only one question:

“what is the difference between you and chatGPT”

  • 1. 1.5b version’s response

  • 2. 8b version’s response

Of course, the 8b version’s knowledge base will be better. However, what concerns us most is resource usage. Can a CPU-based Raspberry Pi handle this efficiently?

#deepseek #AI #CPU #raspberrypi #GPU #nvidia #CUDA #AMD

Deepseek on Raspberry PI?????

Tech folks are interested in how AI and LLM models run on IoT and other low-power devices such as the Raspberry Pi.

But??!!!!

NO GPU!!!!!!!!!!!!

How do you run the AI model????

OK, we don’t want to talk about how to install and run it on OLLAMA.

We tried the 1.5b version of Deepseek on our Pi 4 device with 4GB RAM.

Amazingly, it works! However, you can’t expect the response time and token rate to be good enough for fast responses.
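
To put a number on “not fast enough”, you can time the local Ollama HTTP API. A rough sketch; the model tag is an assumption, so use whatever "ollama list" shows on your device:

import requests

# /api/generate streams by default; stream=False returns one JSON blob
# that includes eval_count (tokens) and eval_duration (nanoseconds).
resp = requests.post("http://localhost:11434/api/generate", json={
    "model": "deepseek-r1:1.5b",  # assumed tag
    "prompt": "what is the difference between you and chatGPT",
    "stream": False,
})
data = resp.json()
tokens = data.get("eval_count", 0)
secs = data.get("eval_duration", 1) / 1e9
print(f"{tokens} tokens in {secs:.1f}s = {tokens / secs:.2f} tokens/s")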

From this kind of success, we can imagine many more models running on CPU-based IoT devices. So, will home assistants adopt this widely?

Let’s see……