Network configuration is an essential Linux administration skill. This guide covers configuring IP addresses, routes, and DNS, first with the low-level `ip` command and then with NetworkManager, followed by a systematic approach to troubleshooting common network issues.
Start with the `ip` command for one-off changes. These settings take effect immediately but do not survive a reboot:

```bash
# Assign a static IPv4 address to eth0
sudo ip addr add 192.168.1.100/24 dev eth0

# Bring the interface up
sudo ip link set eth0 up

# Add a default route via the gateway
sudo ip route add default via 192.168.1.1

# Point DNS at a public resolver
echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf
```
For configuration that persists across reboots, use NetworkManager's `nmcli`. Set `ipv4.method manual` first; otherwise the static address, gateway, and DNS settings are ignored while the profile is still in DHCP mode:

```bash
# List connection profiles
nmcli connection show

# Configure a static address on the eth0 profile
sudo nmcli connection modify eth0 ipv4.method manual
sudo nmcli connection modify eth0 ipv4.addresses 192.168.1.100/24
sudo nmcli connection modify eth0 ipv4.gateway 192.168.1.1
sudo nmcli connection modify eth0 ipv4.dns "8.8.8.8 8.8.4.4"

# Re-activate the profile to apply the changes
sudo nmcli connection up eth0
```
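If no profile exists for the device yet, the same settings can be supplied in a single `nmcli connection add` call. A sketch under the same assumptions as above; the profile name `static-eth0` is just an example:

```bash
# Create a persistent static profile in one step, then activate it
sudo nmcli connection add type ethernet ifname eth0 con-name static-eth0 \
    ipv4.method manual \
    ipv4.addresses 192.168.1.100/24 \
    ipv4.gateway 192.168.1.1 \
    ipv4.dns "8.8.8.8 8.8.4.4"

sudo nmcli connection up static-eth0
```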
When something breaks, work up the stack: link, addressing, routing, DNS, and finally the service itself:

```bash
# Check interface and link status
ip link show

# Test raw IP connectivity (bypasses DNS)
ping -c 4 8.8.8.8

# Inspect the routing table
ip route show

# Test DNS resolution
nslookup example.com
dig example.com

# List listening sockets; ss replaces the legacy netstat
ss -tuln
netstat -tuln
```
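These checks chain naturally into a triage script that stops at the first failing layer, which tells you immediately whether the problem is the link, the LAN, routing, or DNS. A minimal sketch, assuming the interface, gateway, and resolver values used above:

```bash
#!/usr/bin/env bash
# triage.sh - walk the network stack bottom-up and stop at the first failure
set -u

IFACE=eth0
GATEWAY=192.168.1.1
TEST_IP=8.8.8.8
TEST_NAME=example.com

# Link layer: is the interface up?
ip link show "$IFACE" | grep -q "state UP" \
    || { echo "FAIL: $IFACE is down"; exit 1; }

# Local network: can we reach the default gateway?
ping -c 2 -W 2 "$GATEWAY" >/dev/null \
    || { echo "FAIL: gateway $GATEWAY unreachable"; exit 1; }

# Routing: can we reach a host beyond the LAN by IP?
ping -c 2 -W 2 "$TEST_IP" >/dev/null \
    || { echo "FAIL: no route to $TEST_IP"; exit 1; }

# DNS: does name resolution work?
dig +short "$TEST_NAME" | grep -q . \
    || { echo "FAIL: cannot resolve $TEST_NAME"; exit 1; }

echo "OK: link, gateway, routing, and DNS are all healthy"
```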
In short: use `ip` for quick, non-persistent changes, NetworkManager for persistent configuration, and ping, dig, and ss to isolate failures layer by layer.
Before rolling out network configuration changes, define pre-deploy checks, rollout gates, and rollback triggers. Track p95 latency, error rate, and cost per request for at least 24 hours after deployment; if the trend regresses from baseline, revert quickly and document the decision in the runbook.
Keep the operating model simple under pressure: one owner per change, one decision channel, and clear stop conditions. Review alert quality regularly to remove noise and ensure on-call engineers can distinguish urgent failures from routine variance.
Repeatability is the goal. Convert successful interventions into standard operating procedures and version them in the repository so future responders can execute the same flow without ambiguity.
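As a concrete example of a pre-deploy check, snapshot the current network state before touching it so a rollback has a known-good reference, and refuse to start if the baseline is already unhealthy. A sketch; the snapshot location is arbitrary:

```bash
#!/usr/bin/env bash
# Snapshot network state and gate the change on baseline health
set -eu

SNAP_DIR=$(mktemp -d /tmp/netcfg-snapshot.XXXXXX)

ip addr show  > "$SNAP_DIR/addrs.txt"
ip route show > "$SNAP_DIR/routes.txt"
cat /etc/resolv.conf > "$SNAP_DIR/resolv.conf"

# Gate: abort if the host is unhealthy before the change even starts
ping -c 2 -W 2 8.8.8.8 >/dev/null \
    || { echo "baseline connectivity check failed; aborting"; exit 1; }

echo "snapshot saved to $SNAP_DIR; safe to proceed"
```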