If a Linux VPS “has a network problem,” the fastest way to solve it is to stop guessing and check the layers in the correct order: interface, IP address, route, neighbors, DNS, sockets, path, firewall, packet capture, and only after that the application layer. That is the difference between fixing something in five minutes and wasting two hours on angry restarts of services that were healthy all along. This article is the full triage sheet for the kinds of problems that show up in the real world: timeouts, packet loss, dead ports, broken DNS, half-working IPv6, TLS that appears stuck, and servers that “run” but do not answer the way they should.
And yes, this article sells VPS plans. Correctly. Because this exact kind of serious troubleshooting only becomes possible and efficient when you actually control the server: root SSH, firewall, routes, packet capture, IPv4, IPv6, proxying, tunnels, and ports. If you have moved past the shared-hosting phase and reached the point where you read guides like this, you have already outgrown environments that hide the infrastructure from you.
The correct troubleshooting order
- Is the interface up?
- Does the VPS have the correct IP address?
- Is the default route correct?
- Can it reach the gateway and its network neighbors?
- Is DNS resolving correctly?
- Is the process that should answer actually listening on the right port?
- Does the path to the destination look normal?
- Is the firewall allowing exactly what you think it allows?
- Does packet capture confirm that traffic is actually arriving and leaving?
- Is the application responding, or is the problem above TCP?
This order matters. Too many administrators jump straight to the application and completely ignore the network layer underneath it. Then they declare the server “slow” or “broken” when the real cause is a bad route, bad DNS, wrong MTU, or a port that never had a listener in the first place.
The 60-second triage block
If you have one minute before tickets and calls start, run this block first:
hostname -f
ip -br a
ip -br link
ip route
ip route get 1.1.1.1
ss -tulpen
resolvectl status || cat /etc/resolv.conf
ping -c 3 1.1.1.1
ping -c 3 google.com
curl -4 -I --connect-timeout 5 http://example.com
journalctl -u systemd-networkd -u NetworkManager -u systemd-resolved -n 100 --no-pager 2>/dev/null
This block tells you very quickly whether the VPS has addresses, whether interfaces are up, whether a default route exists, whether anything is listening on ports, whether raw IP connectivity works, whether DNS resolves, and whether HTTP answers at all. It forces you to separate “the network is dead” from “the application is dead” almost immediately.
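If you want the triage block as a single rerunnable script, a minimal sketch could look like the one below. The `run` helper is hypothetical, not part of any standard tooling; the `command -v` guard matters because minimal images often ship without iproute2 or ss, and a failing probe should never abort the rest of the pass.

```shell
# Hypothetical triage wrapper (a sketch, not a finished tool): label each
# step and skip tools that are not installed instead of erroring out mid-run.
run() {
  printf '== %s ==\n' "$*"
  if command -v "$1" >/dev/null 2>&1; then
    # Never let one failing probe abort the whole triage pass.
    "$@" 2>&1 || echo "(command failed)"
  else
    echo "(skipped: $1 not installed)"
  fi
}

run hostname -f
run ip -br a
run ip route
run ss -tulpen
run ping -W 1 -c 1 1.1.1.1
```

Dropping this into `/usr/local/bin` means the 2:00 AM version of you types one command instead of ten.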
1. The interface, IP address, and link state
On modern Linux, the base command for interfaces and addresses is ip, not the old reflex of ifconfig. Start with the short view:
ip -br a
ip -br link
This immediately tells you which interfaces exist, which ones are active, and which addresses are assigned. When you need more context, go straight here:
ip addr show dev eth0
ip -s link show dev eth0
ip -d link show dev eth0
If you see rising RX or TX errors, if the interface is down, or if the device has a suspicious state, there is no point wasting time on DNS, TLS, or application logs. Fix the reality of the link first.
For information about the NIC and its counters, especially on KVM environments where the virtual NIC still exposes useful data, use:
ethtool eth0
ethtool -S eth0
Here you are looking for obvious problems: no link, bizarre negotiated speeds, or counters that suggest something is being lost or damaged below the application layer.
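To read those counters programmatically rather than by eye, a small parse of `ip -s link` output is enough. The sketch below inlines invented sample output so the parsing logic can be shown without a live NIC; on a real VPS you would feed it the live command instead.

```shell
# Hypothetical sketch: pull RX/TX error counters out of `ip -s link` output.
# The sample text is invented for illustration.
sample='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP
    link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    123456789  234567   0       12      0       0
    TX: bytes  packets  errors  dropped carrier collsns
    987654321  198765   3       0       0       0'
# Live version: stats=$(ip -s link show dev eth0)
stats=$sample

# The numbers line follows each RX:/TX: header; errors is the third column.
rx_errors=$(printf '%s\n' "$stats" | awk '/RX:/{getline; print $3}')
tx_errors=$(printf '%s\n' "$stats" | awk '/TX:/{getline; print $3}')
echo "rx_errors=$rx_errors tx_errors=$tx_errors"
```

A nonzero value that keeps climbing between two runs is the signal; a static historical count from months of uptime usually is not.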
2. Routing: if the server does not know where to send traffic, nothing else matters
The classic routing checks are still the most useful:
ip route
ip route show table main
ip rule
ip route get 8.8.8.8
ip route get 203.0.113.20
ip route get 2001:4860:4860::8888
ip route get is one of the best triage tools on a Linux VPS. It tells you which source address, which interface, and which gateway Linux would actually use right now for a destination. That matters a lot when the server has multiple IPs, policy routing, extra addresses, VPN tunnels, Docker networks, or dual-stack IPv4 and IPv6.
If you see traffic leaving with the wrong source address, the problem is already local. Do not blame the upstream provider before you read the output of this command.
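Checking the source address and interface from `ip route get` can be scripted, which is useful when you have many VPSes to sweep. This sketch parses an invented sample line; the field-walking works because the output is keyword/value pairs.

```shell
# Hypothetical sketch: extract the interface and source address the kernel
# would use, from `ip route get` output. The sample line is invented.
route_line='8.8.8.8 via 203.0.113.1 dev eth0 src 203.0.113.20 uid 0'
# Live version: route_line=$(ip route get 8.8.8.8 | head -n 1)

dev=$(printf '%s\n' "$route_line" | awk '{for (i=1; i<NF; i++) if ($i == "dev") print $(i+1)}')
src=$(printf '%s\n' "$route_line" | awk '{for (i=1; i<NF; i++) if ($i == "src") print $(i+1)}')
echo "dev=$dev src=$src"
```

If `src` is not the address you expect the world to see, stop there and fix the local routing or address configuration first.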
3. Network neighbors and the real gateway
If the route looks correct but packets still die immediately, check the neighbor table:
ip neigh
ip neigh show dev eth0
ping -c 3 <gateway-ip>
If the gateway's MAC address does not resolve, if neighbors remain stuck in states like FAILED or INCOMPLETE, or if ping to the gateway gets no reply, this is not an application problem. It is a connectivity problem between the server and the first real hop in the network.
On local IPv4 segments, arping is very useful:
arping -I eth0 <gateway-ip>
It tells you quickly whether ARP resolution to the gateway works at all, especially when the IP layer looks “almost alive” but something still does not connect.
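Scanning the neighbor table for stuck entries is easy to automate. The sketch below uses invented `ip neigh` sample output so the filter can be demonstrated offline; swap in the live command on a real host.

```shell
# Hypothetical sketch: flag neighbor entries stuck in FAILED or INCOMPLETE.
# The sample output is invented for illustration.
neigh_sample='203.0.113.1 dev eth0 lladdr 00:00:5e:00:01:01 REACHABLE
203.0.113.7 dev eth0  FAILED
fe80::1 dev eth0 lladdr 00:00:5e:00:01:01 router STALE'
# Live version: neigh_sample=$(ip neigh)

bad=$(printf '%s\n' "$neigh_sample" | awk '/FAILED|INCOMPLETE/{print $1}')
echo "unreachable neighbors: ${bad:-none}"
```

STALE entries are normal and refresh on use; FAILED and INCOMPLETE are the states worth chasing.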
4. Sockets and ports: is anything listening or not?
A ridiculous number of “network issues” are really just “nothing is listening on that port.” This is where ss comes in:
ss -tulpen
ss -ltnp
ss -lunp
ss -pant
ss -s
The practical interpretation is simple:
- ss -ltnp for processes listening on TCP
- ss -lunp for processes listening on UDP
- ss -pant for active sessions, states, and PIDs
- ss -s for a quick summary when you only want the overall state
If the service should answer on 443 and nothing is listening on 443, you do not have a network mystery. You have a service, configuration, or bind-address problem.
If the process listens only on 127.0.0.1:8080 and you expected it to be exposed publicly, that is again a configuration problem, not a “mysterious” network issue.
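The “is anything listening on that port” question is also scriptable. The sketch below runs the check against invented `ss -ltn` sample output; the same awk filter works on live output and catches both the missing-listener case and the loopback-only bind described above.

```shell
# Hypothetical sketch: decide whether anything is listening on a given port,
# from `ss -ltn` style output. The sample output is invented.
port=443
ss_sample='State  Recv-Q Send-Q Local Address:Port  Peer Address:Port
LISTEN 0      4096       127.0.0.1:8080       0.0.0.0:*
LISTEN 0      511          0.0.0.0:80         0.0.0.0:*'
# Live version: ss_sample=$(ss -ltn)

# Column 4 is Local Address:Port; match an exact :port suffix.
listener=$(printf '%s\n' "$ss_sample" \
  | awk -v p=":$port\$" '$1 == "LISTEN" && $4 ~ p {found=1} END{print found+0}')

if [ "$listener" = "1" ]; then
  echo "port $port has a listener"
else
  echo "port $port has NO listener: bind/config problem, not a network mystery"
fi
```

With the sample above the script reports no listener on 443, which is exactly the answer that should redirect you from the network to the service configuration.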
5. DNS: separate bad resolution from lack of connectivity
People often say “the network is down” when what they really mean is “DNS is resolving badly.” Those are not the same problem.
Start with the state of the local resolver:
resolvectl status
resolvectl query example.com
resolvectl query google.com
cat /etc/resolv.conf
If the system uses systemd-resolved, resolvectl shows you the real active resolver picture. Then query the DNS chain directly with dig:
dig example.com A +short
dig example.com AAAA +short
dig example.com MX +short
dig -x 203.0.113.20 +short
dig @1.1.1.1 example.com A
dig @8.8.8.8 example.com A
dig +trace example.com
Here you answer concrete questions:
- Does the domain resolve at all?
- Does it resolve differently on different public resolvers?
- Is reverse DNS what you think it is?
- Is delegation broken further up the chain?
dig +trace remains one of the best commands for cases where DNS is genuinely broken and you want to see the authoritative chain, not just what some cache handed back to you.
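When the question is “do different resolvers disagree,” the comparison itself is trivial to script. The sketch below uses invented answer sets standing in for two `dig ... +short` runs; sorting before comparing matters because resolvers legitimately return records in different orders.

```shell
# Hypothetical sketch: detect when two resolvers return different answer
# sets for the same name. The sample answers are invented.
ans_a='93.184.216.34'
ans_b='93.184.216.34
198.51.100.9'
# Live version:
# ans_a=$(dig @1.1.1.1 example.com A +short)
# ans_b=$(dig @8.8.8.8 example.com A +short)

if [ "$(printf '%s\n' "$ans_a" | sort)" = "$(printf '%s\n' "$ans_b" | sort)" ]; then
  verdict=agree
else
  verdict=disagree
fi
echo "resolvers $verdict"
```

If they disagree, that is your cue for dig +trace: the caches are telling different stories, and only the authoritative chain settles it.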
6. Raw connectivity and loss: ping is simple, but not stupid
ping is still the fastest basic proof of IP reachability and packet-loss behavior:
ping -c 5 1.1.1.1
ping -c 5 8.8.8.8
ping -c 5 google.com
ping -4 -c 5 google.com
ping -6 -c 5 google.com
Run them in order. First a raw IP, then a hostname. That immediately separates transport failure from DNS failure. Then test IPv4 and IPv6 separately, because a lot of servers are “half broken” only on one address family.
Do not get hysterical about one dropped ICMP reply from a busy public host. But if the gateway drops, the resolver drops, and the application destination drops, you have enough evidence that the problem is real.
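When you need the loss figure as a number rather than a feeling, pull it straight out of the ping summary line. The sketch below parses an invented summary; on a live host you would pipe the real ping output through the same sed expression.

```shell
# Hypothetical sketch: extract the loss percentage from a ping summary line.
# The sample line is invented.
ping_summary='5 packets transmitted, 4 received, 20% packet loss, time 4005ms'
# Live version: ping_summary=$(ping -c 20 1.1.1.1 | grep 'packet loss')

loss=$(printf '%s\n' "$ping_summary" | sed -n 's/.* \([0-9.]*\)% packet loss.*/\1/p')
echo "loss=${loss}%"
```

That number is what belongs in a ticket or a monitoring alert, not “ping seems bad.”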
7. Path and MTU: where the road breaks
When traffic leaves the VPS but behaves badly somewhere along the path, move to hop-by-hop tools:
traceroute example.com
traceroute -T -p 443 example.com
tracepath example.com
mtr -rwzc 50 example.com
They play different roles:
- traceroute for the classic path
- traceroute -T -p 443 when you care more about the path for TCP 443 than default UDP probes
- tracepath when you suspect MTU and want a quick clue without special privileges
- mtr when you want latency and loss measured over time, not one snapshot
mtr is one of the best tools for “it feels slow” complaints because it shows whether loss is local, somewhere along the path, or only apparent on a router that deprioritizes ICMP.
And MTU problems are absolutely real. If HTTPS stalls on large responses, if VPN traffic works for small packets but breaks on larger ones, or if some TLS negotiations seem frozen, MTU goes straight onto the short list of suspects.
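To probe MTU by hand, the standard trick is ping with the don't-fragment flag, and the only arithmetic to remember is the header overhead. A quick sketch of the calculation, assuming IPv4 (20-byte IP header plus 8-byte ICMP header):

```shell
# IPv4 overhead on an ICMP echo: 20 (IP header) + 8 (ICMP header) = 28 bytes.
# So a 1500-byte path MTU means the largest unfragmented payload is 1472.
# (For IPv6 the IP header is 40 bytes, so the payload would be mtu - 48.)
mtu=1500
payload=$((mtu - 20 - 8))
echo "probe with: ping -M do -s $payload <destination>"
```

If `ping -M do -s 1472` works but 1473 fails with a fragmentation error, the path MTU really is 1500; if even 1472 fails, start stepping the size down (1432 is a common value behind tunnels) until you find the break point.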
8. HTTP, HTTPS, and the application layer: test the service, not just the network
Once IP, route, and DNS look healthy, move up to the application edge. This is where curl becomes the difference between “it feels slow” and “I know exactly where the time is going”.
curl -I http://example.com
curl -I https://example.com
curl -vk https://example.com
curl -4 -vk https://example.com
curl -6 -vk https://example.com
curl --connect-timeout 5 -m 15 -w '\ncode=%{http_code} ip=%{remote_ip} dns=%{time_namelookup} connect=%{time_connect} tls=%{time_appconnect} ttfb=%{time_starttransfer} total=%{time_total}\n' -o /dev/null -s https://example.com
Here you separate things very clearly:
- Is DNS taking too long?
- Does TCP connect quickly, but TLS stall?
- Does TLS finish, but the application delay the first byte?
- Does IPv6 fail while IPv4 works?
The last command is worth memorizing. In a few seconds, it tells you whether the delay is in name resolution, connection, TLS handshake, or application response.
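Note that curl's timers are cumulative, so the per-phase cost is the difference between adjacent values. The sketch below shows the subtraction with invented sample numbers standing in for a live `-w` run:

```shell
# Hypothetical sketch: turn curl's cumulative timers into per-phase
# durations. The sample values are invented; on a live run you would
# read them from the -w output shown above.
dns=0.012; connect=0.045; tls=0.180; ttfb=0.420; total=0.430

# time_appconnect includes TCP connect, so TLS alone is the difference;
# likewise the application's think time is ttfb minus the TLS finish.
tls_only=$(awk -v a="$tls" -v b="$connect" 'BEGIN{printf "%.3f", a - b}')
app_only=$(awk -v a="$ttfb" -v b="$tls" 'BEGIN{printf "%.3f", a - b}')
echo "tls handshake=${tls_only}s application first byte=${app_only}s"
```

With these sample numbers the application, not the network, owns most of the wait, which is precisely the kind of verdict that ends a “the network is slow” argument.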
9. Firewall: read the real rules, not the ones you think you configured
On modern Linux, the starting point should be nftables, not vague assumptions like “I opened the port”.
nft list ruleset
nft list tables
nft -n list ruleset
nft monitor trace
Chain order, default policy, NAT, and priorities all matter. If you rely on memory instead of reading the actual rules, you are debugging blind.
If you use UFW, inspect that layer explicitly too:
ufw status verbose
ufw app list
But the rule stays the same: the final truth is in the ruleset, not in what you originally meant to configure.
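For orientation, this is roughly what a sane minimal ruleset looks like when you read one. The fragment below is illustrative only, not taken from any particular server: a default-drop input chain that still admits established traffic, loopback, SSH, web, and the ICMPv6 messages IPv6 cannot live without. Adjust the ports before loading anything like it with nft -f.

```
# Illustrative nftables baseline (a sketch, adapt before use)
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    tcp dport { 22, 80, 443 } accept
    icmp type echo-request accept
    # Without neighbor discovery, IPv6 silently dies under a drop policy.
    icmpv6 type { nd-neighbor-solicit, nd-neighbor-advert, nd-router-advert, echo-request } accept
  }
}
```

Reading a live ruleset against a mental template like this is how you spot the missing accept rule or the chain with the wrong policy in seconds.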
10. Packet capture: the line between suspicion and proof
When simple commands stop being enough, go to tcpdump. This is also where it becomes obvious why serious troubleshooting needs a real VPS.
tcpdump -ni eth0
tcpdump -ni eth0 host 203.0.113.20
tcpdump -ni eth0 port 443
tcpdump -ni eth0 'tcp port 443 and host 203.0.113.20'
tcpdump -ni any udp port 53
tcpdump -ni eth0 -w /root/capture.pcap
These filters answer very concrete questions:
- Are SYN packets arriving at all?
- Do replies leave but never come back?
- Do DNS queries go out while replies disappear?
- Is the wrong interface being used?
- Is the firewall dropping traffic before the application sees it?
If you do serious Linux troubleshooting and never end up at packet capture, you simply have not reached the incidents that really matter yet.
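Raw tcpdump output can also be summarized instead of read line by line, for example to see who is actually sending SYNs. The sketch below counts SYNs per source address from invented capture lines; the same pipeline works on a real capture piped in.

```shell
# Hypothetical sketch: count inbound SYNs per source from tcpdump text
# output. The sample lines are invented.
cap='12:00:01.000000 IP 198.51.100.7.51514 > 203.0.113.20.443: Flags [S], seq 100, win 64240, length 0
12:00:01.000200 IP 198.51.100.7.51514 > 203.0.113.20.443: Flags [S], seq 100, win 64240, length 0
12:00:02.000000 IP 192.0.2.9.40000 > 203.0.113.20.443: Flags [S], seq 200, win 64240, length 0'
# Live version:
# cap=$(timeout 10 tcpdump -ni eth0 'tcp[tcpflags] & tcp-syn != 0 and port 443')

# Field 3 is src-addr.src-port; rebuild the address from its first 4 pieces.
syn_counts=$(printf '%s\n' "$cap" \
  | awk '/Flags \[S\]/{split($3, a, "."); src=a[1]"."a[2]"."a[3]"."a[4]; n[src]++}
         END{for (s in n) print n[s], s}' | sort -rn)
echo "$syn_counts"
```

Two retransmitted SYNs from the same source with no reply in the capture is the classic signature of a firewall drop or a listener that is not there.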
11. Network logs: do not ignore what the system is telling you
Not everything shows up from interactive commands. Some problems are obvious in the journal:
journalctl -u systemd-networkd -n 100 --no-pager
journalctl -u NetworkManager -n 100 --no-pager
journalctl -u systemd-resolved -n 100 --no-pager
dmesg | tail -n 100
If the interface is flapping, if DHCP is renegotiating badly, if the NIC driver is reporting errors, or if the local resolver is misbehaving, you often see the first clean clues here.
12. Fast command sets for real incident patterns
SSH does not connect
ss -ltnp | grep ':22'
ip route
ping -c 3 <gateway-ip>
nft list ruleset
tcpdump -ni eth0 port 22
You check the listener, the route, the gateway, the firewall, and whether packets are actually arriving.
The website times out, but ping works
ss -ltnp | grep -E ':80|:443'
curl -vk https://example.com
tcpdump -ni eth0 port 443
journalctl -u nginx -u apache2 -u httpd -n 100 --no-pager
Here the problem is almost always port, TLS, proxy, or application, not raw IP connectivity.
DNS resolves badly or inconsistently
resolvectl status
dig example.com A +short
dig @1.1.1.1 example.com A +short
dig @8.8.8.8 example.com A +short
dig +trace example.com
Here you separate local resolver problems, recursive cache differences, and authoritative delegation failures.
A remote service is slow only from the VPS
mtr -rwzc 50 remote.example.com
curl --connect-timeout 5 -m 20 -w '\nconnect=%{time_connect} tls=%{time_appconnect} ttfb=%{time_starttransfer} total=%{time_total}\n' -o /dev/null -s https://remote.example.com
traceroute -T -p 443 remote.example.com
This tells you whether the pain comes from path latency, TCP connection, TLS, or the remote application response.
There are packet-loss complaints
ping -c 20 1.1.1.1
mtr -rwzc 100 1.1.1.1
ip -s link show dev eth0
ethtool -S eth0
Do not draw serious conclusions from a single ping -c 4. Measure properly and read the interface counters.
IPv6 feels “weird”
ip -6 addr
ip -6 route
ping -6 -c 5 google.com
curl -6 -vk https://example.com
dig example.com AAAA +short
A huge number of “intermittent” incidents are really isolated IPv6 problems while IPv4 stays healthy.
13. When the problem is not the host network, but container networking
If Docker runs on the VPS, a lot of “network problems” are actually confusion between service bind addresses, the container network, and published ports. At minimum, check this:
docker ps
docker network ls
docker network inspect bridge
ss -tulpen
ip route
tcpdump -ni any 'port 80 or port 443'
If the application listens inside the container but the port is not published, if the bridge network is broken, or if the reverse proxy points to the wrong address, the symptoms will look like “broken networking” even though the real issue is orchestration and bind logic.
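The published-vs-merely-exposed distinction is also quick to check mechanically from `docker ps` output. The sketch below parses an invented sample (using a custom `--format` separator, which is an assumption about how you would invoke it) to split containers into the two groups:

```shell
# Hypothetical sketch: which containers actually publish ports to the host?
# The sample lines are invented; a '->' in the Ports column means published.
ports_sample='web|0.0.0.0:443->8443/tcp, 0.0.0.0:80->8080/tcp
db|5432/tcp'
# Live version: ports_sample=$(docker ps --format '{{.Names}}|{{.Ports}}')

published=$(printf '%s\n' "$ports_sample" | awk -F'|' '$2 ~ /->/ {print $1}')
unpublished=$(printf '%s\n' "$ports_sample" | awk -F'|' '$2 !~ /->/ && $2 != "" {print $1}')
echo "published to host: ${published:-none}"
echo "exposed only inside the container network: ${unpublished:-none}"
```

A service in the second group is reachable from other containers but not from the internet, which is exactly the case that gets misreported as “the network is broken.”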
For that side of the story, read Managing Docker and Containers on a VPS: Best Practices for Stability and Performance.
14. When this guide clearly tells you that you need a real VPS
If your real troubleshooting involves tcpdump, nft list ruleset, ip rule, live captures, IPv6 testing, reverse proxies, VPN tunnels, policy routing, or deep debugging of ports and sockets, you have already outgrown shared hosting. You are no longer in the zone where “a simple control panel and a few clicks” are enough. You need a virtual server that you control completely.
That is where this article connects directly to ServerSpan Virtual Servers. Not because everything must be turned into a sales pitch, but because this level of diagnosis only becomes serious and reproducible when you have root SSH, route visibility, firewall access, packet capture, IPv4, IPv6, and resources you control yourself.
15. Which ServerSpan plan makes sense for this kind of work
If you only want to learn and run simple labs, vm.Entry at €4.99 per month is enough as a starting point. If you want a useful Linux VPS for real websites, small reverse proxies, VPN tests, or ordinary workloads, vm.Ready at €9.99 per month is the healthy floor. If you manage production services and want room for packet capture, multiple services, monitoring, and troubleshooting without hitting limits immediately, vm.Steady at €17.99 per month is the plan that makes the most sense for most serious users. If you run several services, heavier traffic, or a more complex stack, vm.Go is the natural next step.
The reason we push KVM plans here is simple: network troubleshooting gets much clearer when you have full control over the system, the network, and the resources. You stop wondering whether the environment is hiding something from you. You can see what is actually happening.
16. If you do not want to be the person running all these checks at 2:00 AM
Then the correct handoff point is not another plugin, another restart, or another hope. It is ServerSpan Linux Administration. This guide shows exactly how much control, patience, and discipline real troubleshooting requires. Some people want total control. Others just want the problem gone and fixed properly. Both are legitimate choices, but the second one means letting someone else own the incident through to the end.
17. Related reading that fits this guide perfectly
If you want the container side of the same discipline, read Managing Docker and Containers on a VPS: Best Practices for Stability and Performance. If you want the operational mindset behind vague performance complaints, read The Reality of “My Server Is Slow” Tickets. And if you keep confusing memory pressure with network trouble, go straight to Linux Swap vs RAM: The Definitive Guide to Memory Management on VPS.
The practical answer
The fastest way to fix networking on a Linux VPS is not one magic command. It is a disciplined order and the right tools: ip for addresses and routes, ss for sockets, ping for basic reachability, dig and resolvectl for DNS, traceroute, tracepath, and mtr for path analysis, curl for the application layer, nft for firewall truth, and tcpdump when you need proof. If you remember nothing else from this article, remember the troubleshooting order and stop jumping straight to the application every time the network looks suspicious.
If you are still trying to do this level of work on hosting that does not give you root, packet capture, firewall control, and route visibility, you are sabotaging your own troubleshooting. Get a real KVM VPS or hand the responsibility off through managed Linux administration. This guide is proof that serious troubleshooting requires serious control.
Source & Attribution
This article is based on original data belonging to serverspan.com blog. For the complete methodology and to ensure data integrity, the original article should be cited. The canonical source is available at: Network Troubleshooting Commands on Linux VPS: Fix Issues Fast.