Most people measure their connection with a speed test and assume that number describes “the internet”. In reality, it describes one path to one set of test servers. Your favourite video app, a work VPN, a game server and a banking site can all take different routes — and in the evening those routes can become busier, longer, or less direct. The result is the classic complaint: “My broadband is fast, but this one service crawls after dinner.”
The internet is a patchwork of separate networks (autonomous systems, each identified by an AS number, or ASN) that must hand traffic to each other. “Peering” is simply two networks agreeing to exchange traffic directly. When peering is healthy, your traffic takes a shorter path: fewer middlemen, fewer hops, and usually lower latency. When peering is missing or congested, traffic is pushed onto a longer route through a third party (often called transit), and that’s where evening slowdowns can show up.
An Internet Exchange Point (IXP) is a neutral meeting place where many networks connect their routers and exchange traffic efficiently. Think of it as a motorway junction: if two networks both connect there, traffic can “turn” directly between them instead of taking a long detour. The key detail for end users is this: a speed test might hit a nearby server on a short path, while a specific service might be forced onto a different, longer path that gets crowded at peak hours.
That’s why “my line is fine” can be true at the same time as “Netflix buffers” or “this app times out”. The bottleneck might not be your Wi-Fi or your local line at all — it might be the interconnect where your ISP hands traffic to the rest of the internet, or the particular route chosen to reach that service.
In November 2025, Vodafone publicly confirmed a major change: it planned to withdraw from public peering at neutral exchange points in Germany, including DE-CIX, and move much of its interconnection strategy to a different model. Reports at the time also noted that Vodafone would keep certain direct links with large streaming and cloud providers, while shifting the broader “long tail” of interconnects elsewhere.
Infrastructure changes like this don’t have to make everything worse — but they can reshape routes overnight. If a previously short, well-peered path becomes a longer path via a different interconnect, you may see higher latency, more jitter (latency variation), or occasional packet loss. And because evening traffic is heavier, any weak point is more likely to show itself between roughly 7pm and 11pm local time.
What makes this feel “weird” is selectivity. One service might stay perfect (because it still has a direct interconnect), while another service becomes flaky (because it now rides a congested or longer route). From a user perspective, it looks random. From a routing perspective, it often isn’t.
If you want to do more than reboot the router, the goal is to capture evidence that separates “local problem” from “route problem”. You don’t need to be a network engineer — you just need a few repeatable measurements you can run when things are bad (evening) and when things are good (morning), then compare.
Start with latency and loss. A simple ping to a stable target (your ISP gateway, a public DNS resolver, or the service hostname/IP if it allows it) can reveal spikes and drops. If your latency to the first hop (often your router or ISP edge) is stable but later hops jump dramatically, that’s a hint the issue is beyond your home. If you see packet loss, note where it begins — sustained loss that starts at a particular hop and continues afterwards is far more meaningful than a single hop that doesn’t respond to probes.
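If you want something concrete to copy, the commands below are one way to run that check, assuming a macOS/Linux shell (on Windows, `ping -n 100` replaces `-c 100`); the resolver and router addresses are placeholders for whatever targets you pick.

```
# Baseline latency/loss check: 100 probes to a public resolver (placeholder target).
ping -c 100 1.1.1.1

# Repeat against your first hop (replace 192.168.1.1 with your router's address)
# to confirm whether the problem starts beyond your home network.
ping -c 100 192.168.1.1
```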
Then use route tracing. On Windows, `tracert` shows the path; on macOS/Linux, `traceroute` does the same. For deeper insight, tools like `mtr` (macOS/Linux) or `pathping` (Windows) combine route tracing with ongoing latency and loss sampling. Run the same test against the same target during peak time and off-peak, save the output, and you’ve already done more useful troubleshooting than most people ever hand to support.
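As a rough sketch of what those runs can look like, assuming `mtr` is installed and with `example.com` standing in for the real service hostname:

```
# Plain route trace (macOS/Linux); on Windows: tracert example.com
traceroute example.com

# mtr in report mode: 100 cycles, wide output, AS numbers where available.
# Run once in the evening and once in the morning, saving each to its own file.
mtr --report --report-wide --report-cycles 100 --aslookup example.com > mtr-evening.txt

# Rough Windows equivalent:
# pathping example.com > pathping-evening.txt
```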
First, focus on patterns, not one-off numbers. A single high-latency hop isn’t automatically the culprit; many routers deprioritise replies to diagnostic probes while forwarding real traffic just fine. What matters is whether the latency increase persists on subsequent hops, and whether packet loss (if any) continues beyond a certain point. If the path itself changes between off-peak and peak (different hops, or a noticeably longer route), that’s also useful: it suggests traffic is being steered differently under load.
Second, write down the basics every time: date, local time, whether you were on Wi-Fi or Ethernet, and which service you were trying to use. If you can, test on Ethernet at least once — it removes the most common red herring. Also note whether the problem affects multiple devices. If every device struggles with the same service during the same evening window, that strongly points away from a single device issue.
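One low-effort way to keep those notes consistent is to let the shell add the timestamps for you. A minimal sketch, assuming a bash shell and `mtr`; the target, the “Ethernet” label and the 15-minute interval are all placeholders to adjust:

```
# Append a timestamped mtr report to one log file every 15 minutes.
# Stop with Ctrl+C; edit the label to match how you are actually connected.
while true; do
  echo "=== $(date) | Ethernet | target: example.com ===" >> route-log.txt
  mtr --report --report-cycles 50 example.com >> route-log.txt
  sleep 900
done
```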
Third, test more than one destination. Pick one target that is usually stable (for example, a well-known public DNS resolver) and one that is the “problem service”. If the stable target stays fine while the problem service degrades, it supports the “route/service-specific” theory. If both degrade, you may be looking at a broader congestion issue closer to your access network.
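Run in the same evening window, that comparison might look like this (macOS/Linux `ping`, with `service.example` standing in for the problem service):

```
# Two destinations, same window: a usually-stable reference and the problem service.
# "tail -2" keeps just the packet-loss and latency summary lines.
echo "--- reference 1.1.1.1 ($(date)) ---" >> compare-evening.txt
ping -c 50 1.1.1.1         | tail -2       >> compare-evening.txt
echo "--- service.example ---"             >> compare-evening.txt
ping -c 50 service.example | tail -2       >> compare-evening.txt
```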

There’s no magic button to “fix peering” from home, but you can often work around symptoms — and you can absolutely collect evidence that makes support take you seriously. The best approach is practical: isolate the problem, confirm it’s repeatable, and then decide whether to mitigate, escalate, or both.
DNS is the most misunderstood lever. Changing DNS can help when the issue is name resolution (slow lookups, wrong answers, or a DNS outage). It can also change which CDN node you reach for some services, which may indirectly change the route. But DNS will not fix a congested interconnect by itself. If your problem is buffering or high ping after a successful connection, DNS is rarely the root cause — treat it as a quick check, not a cure-all.
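A quick way to rule DNS in or out is to compare the answers your current resolver gives with those from a public one. A sketch using `dig` (macOS/Linux; `nslookup` is the rough Windows equivalent), with `service.example` as a placeholder:

```
# Compare DNS answers and lookup time from your current resolver and a public one.
# Different answers can mean a different CDN node; identical answers suggest
# DNS is not the lever here.
dig service.example             # your current resolver
dig service.example @1.1.1.1    # a public resolver, queried directly

# Rough Windows equivalent:
# nslookup service.example 1.1.1.1
```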
A VPN can be extremely useful — not as a permanent solution, but as a diagnostic. When you enable a VPN, you effectively change your internet “exit point” and therefore change routing. If the problem service becomes normal through a VPN, that suggests the issue is likely on the route between your ISP and that service (or its CDN), not inside your home. If the VPN makes no difference, the issue may be closer to your access network, your local congestion, or the service itself.
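To turn that into evidence rather than an impression, trace the same destination with the VPN off and then on, and keep both outputs; a brief sketch, again with `service.example` as a placeholder:

```
# Trace the problem service with the VPN disconnected, then connected,
# and keep both outputs for comparison.
traceroute service.example > trace-vpn-off.txt
# ...connect the VPN, then run the same trace again:
traceroute service.example > trace-vpn-on.txt
```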
When you contact support, avoid vague statements like “the internet is slow”. Instead, send a small, structured bundle: the affected service(s), the exact times, and two traceroute/pathping outputs (one during the problem window, one outside it). Add a ping log if you have one, and specify whether the tests were done on Ethernet. This makes it far easier for a network team to spot where latency jumps or where routing changes.
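If you have been saving the outputs to files, as in the earlier examples, bundling them for the ticket is one command; the filenames below follow those examples, so adjust them to whatever you actually saved:

```
# Bundle the saved outputs into one archive for the support ticket.
tar czf evidence-$(date +%F).tar.gz \
  mtr-evening.txt mtr-morning.txt route-log.txt compare-evening.txt
```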
Ask a precise question: “Can you check congestion or routing on the path to this destination during 19:00–23:00?” That moves the conversation from Wi-Fi resets to network investigation. If you can identify the hop where the route enters a third-party network (often visible in hostnames or AS information), include that too — it signals that you understand the problem category.
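If you want to look up which network a given hop belongs to, Team Cymru’s public whois service maps an IP address to its origin AS, and `mtr` can annotate hops with AS numbers directly; the IP below is a documentation-range placeholder:

```
# Map a hop's IP address to its origin AS via Team Cymru's public whois service.
# Replace 203.0.113.10 with the hop's real address.
whois -h whois.cymru.com " -v 203.0.113.10"

# mtr can also annotate each hop with an AS number directly:
mtr --report --aslookup --report-cycles 20 service.example
```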
Finally, consider resilience. If your work or home relies on stable connectivity, a secondary connection (for example, a 5G hotspot or another fixed line) can be a practical backup. You don’t have to use it daily; even occasional failover during peak-time routing trouble can save hours of frustration — and it’s a straightforward way to confirm whether the issue is tied to one ISP’s routing choices.