Nginx and IP_Hash with multiple clients using one fixed IP address

Hello All
How do you solve the problem when you use the ip_hash setting in nginx.conf and the server is accessed from a company that has a single fixed IP address?
In other words - when all clients come from the same IP.
Thanks.

Hello Daniel,

If all incoming requests come from the same IP address, then Nginx will simply route all of them to the same upstream server.

How are the IP addresses going to be the same?

Even on a LAN you’re going to have different IP addresses. AFAIK.

Don

This looks like what you want.

sticky cookie srv_id expires=1h
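
For reference, sticky cookie is an NGINX Plus (commercial) directive and lives inside an upstream block. A minimal sketch, with hypothetical backend ports:

    upstream backend {
        server localhost:1001;
        server localhost:1002;
        # NGINX Plus only: issue a "srv_id" cookie so each browser
        # keeps hitting the backend it was first routed to
        sticky cookie srv_id expires=1h;
    }

The open-source build does not include this directive.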

Yes, Don, computers on the LAN will have individual addresses.

But it’s possible for them all to communicate with the outside world by sharing a single public routed external IP address by using NAT.

Hello Don
I tested from 3 devices (a desktop PC, a laptop and a mobile). The problem is that all 3 are connected to the same router, so the outside IP address is the same for all 3 devices. That means ip_hash sends all 3 to the same backend server because they all have the same IP.

@RchdR yes, I saw this too, but it is for NGINX Plus (the paid version) and it costs $2500/year.

@Jane yes, a small office with 3-4 employees uses the same router for internet access, and they all have the same external IP.
I tried the round robin and least_conn options, but they always send me to both servers at the same time (or switch so fast that I can’t do anything).
I’m wondering if anyone has a conf file that distributes the traffic correctly when clients share the same external IP address?
I’ve spent a lot of time with ChatGPT but we haven’t found a solution.
In addition to nginx, I also tried HAProxy, but without success.

Have you looked at some of the packages in Linux firewall distros like pfSense and OPNsense? I know HAProxy used to be a package for those distros, but they also had others, some official and some not, so delving into some forums might be needed.

Fwiw

@RchdR I haven’t researched it because I don’t know Linux at all.
I hope someone has a solution though. What comes to mind is a custom hash variable that each client (web browser) sends to nginx, which then forwards the client to the next backend server in the sequence. That’s actually how I understand load balancing and how standard round robin should work, but unfortunately it doesn’t work for me.

Some firewalls provide different load balancing methods in a multi-WAN setup:

https://docs.netgate.com/pfsense/en/latest/multiwan/load-balance-and-failover.html

But as you are coming from a single WAN, why do you need nginx?

Likewise, I’ve seen round robin in a Linux CPU scheduling context; Windows has its own version.

Let me explain: a small office uses a single router to access the internet, and their external IP is 123.456.789.10. Likewise, I use 3 devices to access the internet and each of my devices has the same external IP. I have no idea how to explain it otherwise.
My app is on some Amazon server, running on https://example.com:1234
Basically, behind port 1234 is nginx, and it is configured to route traffic to 3 apps running on ports 1001, 1002 and 1003.
That means people from the example office navigate to https://example.com:1234 and nginx should forward the first client to the app running on port 1001, the second to 1002, the third to 1003, the fourth to 1001, the fifth to 1002, etc.
So with only 3 apps running and 30 people, 10 of them should be forwarded by nginx to the app running on port 1001, 10 to port 1002, etc.
In other words, I want to use nginx to divide the traffic across 3 backends, because otherwise (if all people access the app running on port 1001) the other 29 users need to wait while user 5 (for example) executes some heavy operation.
Using nginx, only 9 of them will wait. Of course, I could run (in this example) 10 instances so that only 3 users share one backend.
I cannot imagine that nobody else has had the same problem.
Using ip_hash on nginx - yes, it works, but people using the same external IP will be forwarded to the same backend. And I cannot imagine that every user in the same office has a different external IP?
Linux is a black box for me and I have no idea where to start. My apps are Windows apps and I’m not sure how to use Linux in that case.
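
For what it’s worth, the setup described above can be sketched as a plain round-robin config (round robin is the default when no balancing method is named; everything except the ports from the example is assumed):

    upstream apps {
        server localhost:1001;
        server localhost:1002;
        server localhost:1003;
    }

    server {
        listen 1234;
        location / {
            # no load-balancing directive in "apps" means round robin
            proxy_pass http://apps;
        }
    }

One caveat: round robin distributes individual HTTP requests, not clients, so the several requests that make up a single page load can land on all three backends - which matches the "both servers at the same time" behaviour described earlier.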

Right, so you have nginx on your server and not at your clients’ site.

Have you tried Amazon’s offering and got that working first?

What is Load Balancing? - Load Balancing Algorithm Explained - AWS.

Even then, do you really need load balancing, as you are only using one server? Load balancing is really for multiple server instances.

If you want to stop one user from using all the available bandwidth then you need to use bandwidth throttling.

OK, let me try it this way: I need to divide 30 clients, all using the same external IP, across 10 backends running on the same server on different ports - 3 clients per backend.
As the backend, NetTalk can be used, or the Handy Tools server app - whatever.
Currently, I can only achieve that all 30 are forwarded to the same backend, using the ip_hash setting. It looks like sticky cookie would work, but it costs $2500/year.

NAT on the client router maintains the table of browser/tab sessions that’s been mentioned above. Another factor that hasn’t been mentioned is running multiple browsers and/or tab sessions in a desktop session, which could itself be one of multiple desktop instances.

I can’t see any point in load balancing the clients’ web browsers, only the web servers, but maybe you have a reason to do this? Either way, if you want to do it, part with the $2500.

If you’re using NetTalk then you don’t need load balancing for 30 users. You’d need thousands of users (at least).

The short answer to your question, though, is that the client needs to include a header that the load balancer can use (especially if the server maintains a session for the user). Clearly the IP address is not sufficient in your case.

Sometimes the client can pass the value as a cookie. That’s the most common approach if the client is a browser. If it’s an API client, you have more flexibility.

Consult your load balancer docs for cookie based redirection.
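
In open-source nginx, cookie-based stickiness is usually approximated with the hash directive keyed on a cookie. A sketch, assuming the application sets a per-client cookie named uid (the cookie name and ports are hypothetical):

    upstream apps {
        # route by a per-client cookie; "consistent" keeps the mapping
        # stable if backends are added or removed
        hash $cookie_uid consistent;
        server localhost:1001;
        server localhost:1002;
        server localhost:1003;
    }

Note that requests without the cookie all hash the same empty key and therefore land on one backend, so the cookie has to be set on the very first response.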

@Bruce Imagine that some process takes 5 minutes to finish (some heavy calculation) and all users need to wait. It is better if 2 users wait instead of 29.
Your suggestion is OK in theory, and I already generate a uniqueID cookie per client; nginx “sees” it (I checked the log) and I used it with “hash” and remote_addr like this:

    upstream myapp {
        hash $remote_addr$cookie_uniqueID consistent;
        server localhost:2000;
        server localhost:2001;
    }

but it simply does not work (I also tried uniqueID alone and many other combinations…)
I could always run 10 apps listening on different ports and send clients 10 different URLs, but I wanted to try to use one URL for all of them.
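
One possible pitfall with the config above: on the very first request (before any Set-Cookie takes effect), $cookie_uniqueID is empty, so all cookie-less requests hash to the same key and the distribution looks broken. An alternative worth trying is split_clients, which hashes a key and splits traffic by percentage. A sketch using the same assumed cookie name and the two ports from the config above:

    split_clients "${cookie_uniqueID}" $app_backend {
        50%     127.0.0.1:2000;
        *       127.0.0.1:2001;
    }

    server {
        listen 1234;
        location / {
            proxy_pass http://$app_backend;
        }
    }

split_clients goes in the http context; IP addresses are used instead of "localhost" because proxy_pass with a variable would otherwise require a resolver directive.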

This wouldn’t happen on NetTalk because each incoming request is processed on its own thread. So one user cannot “block” other users. (You haven’t mentioned which server you are actually using for the app, so I can’t comment on your case, but I wanted to clear up the NetTalk situation because you used it as an example.)

Unfortunately I’m not an nginx user so I can’t really comment on that specifically. Obviously if it’s not behaving as expected I’d recommend asking on an nginx forum - you’ll find more nginx people there than here.

@Bruce I’m using an “in house” solution (like I described a few months ago) to display HTML content based on the client’s request. As I do not have SSL and other advanced options, I wanted to use nginx to solve that. I will look in the nginx forum, thanks.