Linux runs the internet. Most web servers, cloud infrastructure, network equipment, and security tools run on it. If you work in IT or security and you are not comfortable on Linux, you have a serious gap. This room fills that gap — starting from zero, building to functional, with real examples and real context throughout.
Linux is not one thing — it is a kernel (the core) surrounded by a collection of software. Different organizations package that kernel with different software and call the result a "distribution" or distro. There are hundreds of them. In professional IT and security work, you will mostly see four.
The good news is that once you learn one, the others are maybe 15% different. The commands, concepts, and file structure are nearly identical. The main differences are the package manager (how you install software) and some minor config file locations.
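To see how small the gap really is, here is a sketch that detects which family you are on. The main thing that changes between families is the install command itself:

```shell
#!/bin/bash
# Sketch: detect which package-manager family this distro belongs to.
# The commands, file layout, and concepts are the same either way.
if command -v apt >/dev/null 2>&1; then
    echo "Debian family: install software with 'sudo apt install <pkg>'"
elif command -v dnf >/dev/null 2>&1; then
    echo "Red Hat family: install software with 'sudo dnf install <pkg>'"
else
    echo "Unknown family: check /etc/os-release for details"
fi
```

Run it on both your Ubuntu and Rocky VMs and notice that everything except that one command is identical.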
The most popular Linux distro for desktop use and learning. Great hardware support, huge community, and most tutorials online use it. Always use LTS versions (24.04 LTS, 22.04 LTS) — they get security updates for 5 years. Package manager: apt.
Red Hat Enterprise Linux. The dominant enterprise server OS at large companies — banks, hospitals, government. Requires a paid subscription in production but is free for developers. If you want enterprise Linux jobs, learn this. Package manager: dnf.
A community-maintained rebuild of RHEL that is completely free. Same packages, same behavior, no subscription needed. This is what you use to practice RHEL skills in your home lab. Also very common in smaller companies that need RHEL compatibility without the cost. Package manager: dnf.
The stable, minimal choice. Ubuntu is actually built on top of Debian. Known for being rock-solid and never breaking. Common on web servers and anywhere stability matters more than having the latest packages. Package manager: apt (same as Ubuntu).
Choosing a Linux distro is like choosing a car brand. A Ford and a Toyota both have four wheels, a steering wheel, and an engine. You drive them the same way. The differences are under the hood and in the dashboard details. Learn to drive on Ubuntu. The skills transfer directly to everything else.
Before you touch anything real, you need a safe place to practice. Virtual machines are that place. You can install Linux, break it completely, and restore it to a clean state in under a minute. This is not optional — this is literally how the industry trains people.
If you already set up VirtualBox or VMware for the Windows room, you are already halfway there. If not, go back to the Windows room's lab section and install one of them first.
Go to ubuntu.com/download/desktop and grab the LTS version — the file ending in .iso, about 5 GB. This is the installation image. You will point your VM at this file and it boots from it like a DVD.
Open VirtualBox, click New. Give it a name like "Ubuntu-Lab". Under ISO Image, browse to your downloaded .iso file. VirtualBox detects it as Ubuntu automatically. Tick "Skip Unattended Installation" — you want to do the install manually so you actually learn it.
Hardware: RAM 2048 MB minimum (4096 is much better). CPUs: 2. Hard disk: create new, 25 GB minimum, VDI format, dynamically allocated (it does not use 25 GB of real disk space immediately — it only grows as you add files).
In VirtualBox, go to Settings > Network before booting. You have a few adapter types:
NAT (default) — the VM shares your host machine's internet. Easy, no configuration needed. The VM can reach the internet but you cannot SSH into it from your host without port forwarding.
Bridged Adapter — the VM gets its own IP on your physical network, as if it were a real computer plugged into your router. Other devices, including other VMs, can reach it directly. Use this for any VM that needs to be reachable.
Host-Only Adapter — creates a private network that only exists between your host machine and your VMs. No internet access from the VM. Use this to build isolated lab environments where you do not want VMs touching the real network.
For a multi-machine lab (e.g., Ubuntu desktop + Rocky Linux server), give each VM a Host-Only adapter so they can talk to each other. Add a second adapter set to NAT if you also need internet access on them.
Start the VM. It boots from the ISO and shows you an Ubuntu welcome screen. Click Install Ubuntu. Choose your language. On installation type, select Minimal installation to save space and keep things clean.
For disk: choose Erase disk and install Ubuntu. Do not panic — this only erases the virtual disk we just created, not your real computer. Click Install Now, confirm. Set your name, computer name (keep it simple, like "ubuntu-lab"), username, and password.
Wait about 10 minutes. When it finishes, it asks to restart. Click Restart Now. When it prompts you to remove the installation medium, just press Enter.
After Ubuntu boots, in the VirtualBox menu at the top click Devices > Insert Guest Additions CD Image. A notification pops up inside Ubuntu asking if you want to run it. Click Run. Enter your password when prompted.
This installs drivers that give you: proper screen resolution (instead of the tiny 800x600 default), clipboard sharing between your real computer and the VM, and drag-and-drop file transfer. Restart the VM when it finishes.
This is the most important step. In VirtualBox: Machine menu > Take Snapshot. Name it "Clean Ubuntu Install". Now if you destroy everything trying out commands, you restore this snapshot and you are back to a fresh system in 30 seconds. Take snapshots regularly — any time you get to a good working state you want to preserve.
Download the Rocky Linux 9 Minimal ISO from rockylinux.org. Same VM creation process, but 2 GB RAM and 20 GB disk are plenty — no desktop, no GUI, just a command line. This is how real servers run. When the VM boots after install, you see only a text login prompt. That is correct and intentional. Type your username, press Enter, type your password, press Enter. You are in.
Run two VMs at once. Ubuntu for your desktop to work from, Rocky Linux server to practice on. This mirrors the actual setup in most companies — you are on a Windows or Mac workstation, SSH-ing into Linux servers. It also gets you comfortable with the terminal-only server experience much faster than a desktop VM would.
Linux has no drive letters. There is no C:\ or D:\. Everything — every file, every device, every process — lives somewhere under a single root directory called /. This trips up everyone coming from Windows at first, but once it clicks it is actually cleaner.
Imagine a tree. The trunk is /. Every branch coming off it is a subdirectory. Every leaf is a file. There is only one tree. Everything hangs off that single trunk. Windows has multiple trees (C:\, D:\, etc.). Linux has one. This means you never have to wonder "which drive is this on?" — it is always somewhere under /.
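You can see the single tree on any live system. A second disk never becomes a new drive letter; it gets attached ("mounted") at a directory you choose. The device name sdb1 and mount point /data below are illustrative:

```shell
# One tree: disks are attached ("mounted") at directories, not drive letters.
# Hypothetical second disk (device names vary -- sdb1 is illustrative):
#   sudo mkdir /data
#   sudo mount /dev/sdb1 /data    # /dev/sdb1 now appears as the /data directory
# See what is mounted where right now, starting from the root filesystem:
df -h /
```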
/ # The root. Everything starts here.
├── bin # Essential programs everyone can use (ls, cp, mv, cat)
├── sbin # System programs for admins (fdisk, iptables, reboot)
├── etc # ALL configuration files live here. SSH config, cron jobs,
│ # network config, user passwords (hashed), service configs.
│ # This is the most important directory for an admin.
├── home # Home directories for regular users
│ ├── weston # Everything belonging to user "weston"
│ └── bob # Everything belonging to user "bob"
├── root # Home directory for the root user (NOT /home/root)
├── var # Variable data — things that change constantly
│ └── log # All system logs live here (/var/log/syslog, /var/log/auth.log)
├── tmp # Temporary files. Wiped on reboot. Anyone can write here.
│ # Attackers love /tmp for staging files.
├── usr # User programs installed by packages
│ ├── bin # Most commands you type (python3, curl, git, vim)
│ └── lib # Libraries those programs depend on
├── opt # Third-party software that installs itself (Wazuh agent, custom tools)
├── proc # Virtual filesystem. Not real files — represents running processes.
│ # /proc/1234/ contains info about process with PID 1234.
├── dev # Device files. /dev/sda is your first hard disk.
│ # /dev/null is the void — redirect output here to discard it.
└── boot # Kernel and bootloader files. Do not touch unless you know what you are doing.
The directory you will spend the most time in as an admin is /etc. SSH config? /etc/ssh/sshd_config. User accounts? /etc/passwd. Cron jobs? /etc/cron.d/. DNS resolver config? /etc/resolv.conf. Learn this directory like your own home.
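Since /etc/passwd will come up constantly, here is a one-liner worth keeping. It assumes the standard colon-separated layout (name:x:UID:GID:comment:home:shell) and lists accounts whose shell ends in "sh", a quick "who can actually log in?" check:

```shell
# Field 7 of /etc/passwd is the login shell. Service accounts usually have
# /usr/sbin/nologin or /bin/false; real users have something ending in "sh".
awk -F: '$7 ~ /sh$/ {print $1, $7}' /etc/passwd
```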
When investigating a compromised Linux machine, /tmp and /var/tmp are the first places I look. Attackers routinely drop tools there because they are world-writable — any user can create files in them. I have found full attack toolkits in /tmp more times than I can count. Also check /dev/shm — another world-writable location that many investigators miss.
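A quick triage sweep of those staging directories can be scripted. This is a sketch, not a full forensic process, and the 7-day window is an arbitrary starting point:

```shell
#!/bin/bash
# Triage sweep: list files modified in the last 7 days in the classic
# world-writable staging directories attackers favor.
for dir in /tmp /var/tmp /dev/shm; do
    echo "=== $dir ==="
    find "$dir" -xdev -type f -mtime -7 -ls 2>/dev/null
done
```

On a quiet system the output should be nearly empty; unexpected executables or archives here deserve a closer look.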
These are the commands you will use every single day. Do not try to memorize them all at once — open your Ubuntu VM, follow along, and type each one. Muscle memory comes from doing, not reading.
pwd # Print Working Directory — "where am I right now?"
# Output: /home/weston
ls # List files in current directory
ls -l # Long format — shows permissions, owner, size, date
ls -la # Long format + show hidden files (files starting with .)
ls -lh # Long format + human-readable sizes (KB, MB, GB)
ls /etc # List files in /etc without going there first
cd /etc # Change Directory to /etc
cd ~ # Go to your home directory (~ is a shortcut for it)
cd .. # Go up one level
cd - # Go back to the previous directory (like the back button)
mkdir projects # Create a directory called "projects"
mkdir -p /opt/myapp/logs/archive # Create the full path even if parent dirs don't exist
# Without -p this would fail if /opt/myapp doesn't exist yet
touch notes.txt # Create an empty file (or update its timestamp if it exists)
cp notes.txt /tmp/ # Copy a file to /tmp/
cp -r /opt/myapp /opt/myapp-backup # Copy a directory and everything in it (-r = recursive)
mv notes.txt renamed.txt # Rename a file
mv notes.txt /home/weston/docs/ # Move a file to a different directory
rm notes.txt # Delete a file
rm -rf /opt/myapp # Delete a directory and ALL its contents with no confirmation
rm -rf / would delete everything on your system. There is no undo, no recycle bin. (Modern versions of rm refuse to operate on / itself unless you add --no-preserve-root, but that guardrail protects only that one exact path.) Linux will do exactly what you tell it without asking. Always double-check your path before running rm -rf. A misplaced space can be catastrophic. For example: rm -rf /opt /myapp (with a space) deletes /opt entirely AND tries to delete /myapp. That extra space just destroyed your /opt directory.
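Two habits that prevent exactly this mistake: preview the command with echo so you see what rm would actually receive, and use GNU rm's -I flag for a confirmation prompt. The path below is illustrative:

```shell
# Habit 1: preview with echo -- the shell expands everything exactly as rm
# would see it, but nothing is deleted.
echo rm -rf /opt/myapp
# Habit 2: GNU rm's -I asks for confirmation once before removing more than
# three files or removing recursively. (Deletion line commented out on purpose.)
# rm -rI /opt/myapp
```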
cat /etc/hostname # Print entire file to the screen
# Good for small files. Terrible for large ones.
less /var/log/syslog # View large files with scrolling
# Arrow keys or Page Up/Down to scroll, q to quit
# / to search, n for next match
head -20 /var/log/syslog # First 20 lines only
tail -50 /var/log/syslog # Last 50 lines only
tail -f /var/log/syslog # Follow the file in real time as new lines are added
# This is how you watch logs live. Ctrl+C to stop.
# grep searches inside files (or input) for a pattern
grep "Failed password" /var/log/auth.log # Find failed SSH login attempts
grep -i "error" /var/log/syslog # -i = case insensitive
grep -r "password" /etc/ # -r = search recursively through a directory
grep -v "INFO" /var/log/app.log # -v = show lines that do NOT match (invert)
grep -n "root" /etc/passwd # -n = show line numbers
# find searches for files by name, type, permissions, date, size
find / -name "sshd_config" # Find a file by name anywhere on the system
find /var/log -name "*.log" -mtime -1 # Log files modified in the last 24 hours
find / -perm -4000 -type f 2>/dev/null # Find all SUID files (privilege escalation targets)
find /tmp -type f -mmin -60 # Files in /tmp modified in the last 60 minutes
# The pipe | takes output from one command and feeds it into another
cat /etc/passwd | grep weston # Find user "weston" in passwd file
ps aux | grep nginx # Is nginx running?
ls -la /tmp | sort -k5 -n # Sort files in /tmp by size
# Redirect output to a file
echo "backup complete" > /tmp/status.txt # Write (overwrites existing content)
echo "another line" >> /tmp/status.txt # Append (adds to existing content)
command 2>/dev/null # Discard error messages
command > output.txt 2>&1 # Capture both normal and error output to file
uname -a # Kernel version, architecture, hostname
cat /etc/os-release # Distro name and version (works on all distros)
hostname # Machine name
whoami # Current username
id # Username, UID number, and group memberships
uptime # How long has this machine been running?
df -h # Disk space for each filesystem, human-readable
du -sh /var/log/ # How much space is the /var/log directory using?
free -h # RAM and swap usage
top # Live process monitor, CPU and RAM usage per process
# Press q to quit, k to kill a process by PID
ps aux # Snapshot of all running processes
history # Every command you have typed in this terminal session
history | grep ssh # Find previous SSH commands you ran
Linux permissions are one of the most fundamental concepts in the entire OS. They control who can read, write, and execute every single file. Getting them wrong is one of the most common security mistakes I see — too restrictive and things break, too permissive and everything is exposed.
Think of each file like a document in an office. Permissions are a lock system. The owner is the person who created the document. The group is the department they belong to. Others is everyone else in the company. You can set: can this person read it? Can they edit it? Can they run it if it is a program? For the owner, group, and others independently.
# Run ls -la and you will see something like this:
-rwxr-xr-- 1 weston developers 4096 Jan 15 10:30 deploy.sh
# Breaking it down character by character:
# - = file type (- = regular file, d = directory, l = symbolic link)
# rwx = owner permissions (weston can Read, Write, eXecute)
# r-x = group permissions (developers can Read, not Write, can eXecute)
# r-- = others permissions (everyone else can only Read)
# 1 = number of hard links (usually not important)
# weston = the owner
# developers = the group
# 4096 = file size in bytes
# r = read (value: 4) — can you open and read the contents?
# w = write (value: 2) — can you modify or delete it?
# x = execute (value: 1) — can you run it as a program?
# - = no permission
There are two ways to use chmod — numeric (octal) and symbolic. Both do the same thing. Numeric is faster once you know it. Symbolic is more readable.
# ---- NUMERIC METHOD ----
# Add up the values: r=4, w=2, x=1. One number per group (owner, group, others).
# rwx = 4+2+1 = 7
# r-x = 4+0+1 = 5
# r-- = 4+0+0 = 4
# --- = 0+0+0 = 0
chmod 755 script.sh # owner: rwx(7), group: r-x(5), others: r-x(5)
# Use for scripts you want anyone to run but only you to edit
chmod 644 config.txt # owner: rw-(6), group: r--(4), others: r--(4)
# Use for regular files — owner edits, others read
chmod 600 id_rsa # owner: rw-(6), nobody else: nothing
# Use for SSH private keys — SSH REFUSES to use keys that are too permissive
chmod 700 ~/.ssh/ # Only owner can access this directory
chmod 777 /tmp/shared/ # Everyone can do everything — almost never use this
# ---- SYMBOLIC METHOD ----
# u = user/owner, g = group, o = others, a = all three
# + adds permission, - removes it, = sets it exactly
chmod u+x script.sh # Add execute for the owner
chmod g-w file.txt # Remove write from the group
chmod o+r report.pdf # Add read for others
chmod a+x run.sh # Add execute for everyone
chmod u+x,g-w file.sh # Multiple changes at once
# Apply recursively to a directory and all its contents
chmod -R 755 /var/www/html/
chown weston file.txt # Change the owner to weston
chown weston:developers file.txt # Change owner AND group
chown :developers file.txt # Change group only (note the colon)
chown -R www-data:www-data /var/www/ # Recursive — important for web server files
# www-data is the user nginx/apache runs as
400 # SSH private keys on very locked-down systems (owner read-only)
600 # SSH private keys, config files containing passwords
644 # Normal files — owner edits, everyone reads
664 # Group-collaborative files where the group also needs to write
755 # Scripts, executables, directories that are publicly accessible
750 # Scripts that only the group should execute
700 # Private directories — nobody else even gets to look inside
The most common permission mistake I see in the field is web application files being owned by root with world-readable permissions on the config files. That means the database password sitting in config.php or .env is readable by the web server process, which an attacker who exploits the web app can now read. Config files with credentials should be 600 or 640 at most, owned by the application user, not root.
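Fixing that looks like the sketch below. The path, file contents, and the user name "appuser" are demo placeholders so the snippet is safe to run as-is; on a real system you would also chown the file to the application user (commented out here because it needs root):

```shell
#!/bin/bash
# Sketch: lock down an application secrets file. The path and contents are
# demo placeholders; substitute your real config file and application user.
ENVFILE="/tmp/demo-app/.env"
mkdir -p "$(dirname "$ENVFILE")"
echo "DB_PASSWORD=example" > "$ENVFILE"
# sudo chown appuser:appuser "$ENVFILE"   # hypothetical app user; needs root
chmod 600 "$ENVFILE"                      # owner read/write, nobody else
ls -l "$ENVFILE"                          # should show -rw-------
```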
Regularly run this to find files anyone can write to on your system: find / -perm -o+w -type f 2>/dev/null. In a well-configured system this list should be short and every item should have a clear reason. World-writable files in unusual locations are a red flag for either misconfiguration or an attacker having already modified something.
Linux has a root account — equivalent to Windows' built-in Administrator, but more powerful. Root can do literally anything. Delete the entire OS. Read any file. Kill any process. This is why you almost never log in as root directly. Instead, you use sudo.
Root is like a master override key that opens every door and disables every alarm. You keep that key in a safe. When you need to do something that requires it, you use sudo — which is like a supervised checkout process. You identify yourself (enter your password), sudo confirms you are authorized, runs the one command as root, and then the key goes back in the safe. You are never "logged in as root" for longer than it takes to run that one command.
sudo command # Run a single command as root
sudo apt update # Example: update package lists (requires root)
sudo -i # Open a full root shell
# Your prompt changes to # instead of $
# Everything you type now runs as root
# Type "exit" to leave the root shell
sudo -u weston command # Run a command as a different user (not root)
sudo !! # Re-run the previous command with sudo
# Extremely useful when you forget sudo on a command
sudo -l # What commands are you allowed to run with sudo?
# (ALL) means everything — you are a full sudo user
# Create a user with a home directory and a bash shell
sudo useradd -m -s /bin/bash weston
# -m = create home directory at /home/weston
# -s /bin/bash = set bash as their shell (otherwise they get the system default, often /bin/sh)
# Set their password (you will be prompted to type it twice)
sudo passwd weston
# Ubuntu has a friendlier command that does all the above interactively
sudo adduser weston
# Delete a user and their home directory
sudo userdel -r weston
# -r removes their home directory. Without -r, the home directory stays
# and is owned by a non-existent user UID. Clean up properly.
# Modify an existing user
sudo usermod -aG sudo weston # Add to sudo group (Ubuntu/Debian)
sudo usermod -aG wheel weston # Add to wheel group (RHEL/Rocky — same effect)
sudo usermod -s /bin/bash weston # Change shell
sudo usermod -L weston # Lock the account (they cannot log in)
sudo usermod -U weston # Unlock the account
# Groups
sudo groupadd developers
sudo gpasswd -a weston developers # Add weston to developers group
sudo gpasswd -d weston developers # Remove weston from developers group
groups weston # What groups is weston in?
# ALWAYS edit sudoers with visudo — it validates syntax before saving
# A syntax error in /etc/sudoers can lock you out of the whole system
sudo visudo
# Common entries you will see and write:
weston ALL=(ALL:ALL) ALL
# weston can run any command as any user from any machine. Full sudo.
weston ALL=(ALL) NOPASSWD: ALL
# Same but does not ask for a password. Used in automation but risky.
weston ALL=(ALL) /usr/bin/systemctl, /usr/bin/journalctl
# weston can ONLY run systemctl and journalctl with sudo. Nothing else.
# This is the principle of least privilege — give only what is needed.
%developers ALL=(ALL) ALL
# The % means this applies to a group. Everyone in the developers group gets full sudo.
On production servers I often see sudoers configured as NOPASSWD for service accounts that run automation scripts. That is sometimes necessary — you cannot have a script pause and wait for a password input at 3am. But it means if that service account is compromised, the attacker has full sudo with no friction. Audit your sudoers file. Know exactly who has NOPASSWD and why. Every entry should have a documented reason.
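A minimal audit sketch: search the sudoers file and its drop-in directory for NOPASSWD entries. Reading these files requires root, so in practice you run it with sudo:

```shell
#!/bin/bash
# List every NOPASSWD entry across sudoers and its drop-in directory.
# Run with sudo -- the files are not readable by normal users.
grep -rn "NOPASSWD" /etc/sudoers /etc/sudoers.d/ 2>/dev/null \
    || echo "No NOPASSWD entries found (or files not readable)"
```

Every line this prints should map to a documented reason. Anything you cannot explain is a finding.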
Shell scripts are text files containing commands that run in sequence. They are how you automate repetitive tasks, schedule maintenance, deploy applications, and save yourself from typing the same ten commands every morning. Every Linux admin writes scripts constantly.
# Open a text editor and create a new file
nano myscript.sh
# (nano is simple, built into most distros. vim is more powerful but has a learning curve.)
# ALWAYS start with a shebang — tells Linux which interpreter to use
#!/bin/bash
# Without this, Linux does not know it is a bash script.
# It might try to run it as a sh script, or fail entirely.
echo "Hello. This is running as: $(whoami)"
echo "Current directory: $(pwd)"
echo "Today is: $(date)"
# In nano: Ctrl+X to exit, Y to confirm save, Enter to keep filename
# The file exists but is NOT executable yet
ls -la myscript.sh
# Output: -rw-r--r-- 1 weston weston 78 Jan 15 10:00 myscript.sh
# No x in the permissions — Linux will not run it as a program yet
# Make it executable
chmod +x myscript.sh
# Now run it (the ./ means "in the current directory")
./myscript.sh
# You can also always run it by specifying the interpreter directly
# This works even without execute permission
bash myscript.sh
#!/bin/bash
echo "===== System Health Check: $(date) ====="
echo ""
echo "--- Hostname ---"
hostname
echo ""
echo "--- Disk Usage ---"
df -h | grep -v tmpfs # Show disk usage, skip virtual filesystems
echo ""
echo "--- Memory Usage ---"
free -h
echo ""
echo "--- Top 5 CPU Processes ---"
ps aux --sort=-%cpu | head -6 # Sort by CPU, show top 5 (head -6 includes the header)
echo ""
echo "--- Last 10 Failed SSH Logins ---"
grep "Failed password" /var/log/auth.log | tail -10
#!/bin/bash
# Variables — no spaces around the = sign
LOGDIR="/var/log/myapp"
MAXDAYS=30
# Use variables with $
echo "Cleaning logs older than $MAXDAYS days from $LOGDIR"
# If/else — check if a directory exists
if [ -d "$LOGDIR" ]; then
    echo "Log directory exists."
else
    echo "Log directory not found. Creating it..."
    mkdir -p "$LOGDIR"
fi
# -d checks if it is a directory
# -f checks if it is a file
# -z checks if a string is empty
# Always quote variables in [ ] to handle spaces in paths safely
# Loop through files
for logfile in /var/log/*.log; do
    echo "Processing: $logfile"
    # Do something with each file
done
# Loop with a counter
for i in {1..5}; do
    echo "Attempt $i..."
done
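One thing the variables example announces but never does is the actual cleanup. A sketch of that missing step, using find's -mtime and -delete, pointed at a throwaway demo directory so it is safe to run as-is (a real script would use the real log path):

```shell
#!/bin/bash
# The cleanup step itself: delete logs older than MAXDAYS.
LOGDIR="/tmp/demo-myapp-logs"      # demo path; real script: /var/log/myapp
MAXDAYS=30
mkdir -p "$LOGDIR"
touch "$LOGDIR/fresh.log"                       # modified now -- will be kept
touch -d "40 days ago" "$LOGDIR/stale.log"      # 40 days old -- will be deleted
find "$LOGDIR" -name "*.log" -mtime +"$MAXDAYS" -delete
ls "$LOGDIR"                                    # prints: fresh.log
```

-mtime +30 means "modified strictly more than 30 days ago". Always test a find like this without -delete first to see exactly what would match.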
Network troubleshooting is a daily activity for any IT person or security professional. These commands are your toolkit for diagnosing what is connected, what is listening, where packets are going, and what is reachable from this machine.
# Modern way — ip command (use this)
ip addr show # All interfaces with their IP addresses
ip a # Shorthand for the above
ip addr show eth0 # Just the eth0 interface
# Older way — ifconfig (still works, install net-tools if not found)
ifconfig
# The key things to look for in ip addr output:
# inet 192.168.1.50/24 = this interface has IP 192.168.1.50 on a /24 network
# UP = interface is enabled
# NO-CARRIER = cable unplugged or VM adapter not connected
# link/ether = MAC address
# Routing table — where does traffic go?
ip route show # Shows default gateway and network routes
ip r # Shorthand
# "default via 192.168.1.1" means all traffic with no more specific route
# goes to 192.168.1.1 (your router/gateway)
ping 8.8.8.8 # Can I reach Google's DNS? Tests basic internet
ping -c 4 google.com # Send exactly 4 packets then stop (-c = count)
ping -c 3 192.168.1.1 # Can I reach my gateway?
traceroute google.com # Trace the path packets take to reach a destination
# Each line is a router. Where it stops = where the problem is.
# Install on Ubuntu: sudo apt install traceroute
mtr google.com # Live traceroute that keeps refreshing
# Shows packet loss at each hop. Better than traceroute.
# Install: sudo apt install mtr
nslookup google.com # Basic DNS lookup — what IP does google.com resolve to?
nslookup google.com 8.8.8.8 # Ask a specific DNS server
dig google.com # More detailed DNS lookup
dig google.com MX # Look up mail (MX) records
dig @192.168.1.10 server.local # Query your internal DNS server specifically
cat /etc/resolv.conf # What DNS servers is this machine configured to use?
# If this points to 8.8.8.8 instead of your internal DC,
# internal hostnames will not resolve.
# ss is the modern replacement for netstat — use this
ss -tulnp
# -t = TCP connections
# -u = UDP connections
# -l = only listening ports (services waiting for connections)
# -n = show numbers, not names (faster)
# -p = show which process owns each connection
ss -tulnp | grep :22 # Is SSH listening?
ss -tulnp | grep :80 # Is a web server listening on port 80?
ss -anp # All connections including established ones
# Test if a port is open on another machine
nc -zv 192.168.1.10 22 # Is port 22 (SSH) reachable on that host?
nc -zv 192.168.1.10 80 # Is port 80 (HTTP) open?
# "succeeded" = open. "refused" = closed. Timeout = firewall blocking it.
# HTTP testing
curl -I https://google.com # Fetch just the HTTP headers
curl -o /dev/null https://google.com # Download but discard output (just test connectivity)
wget https://example.com/file.tar.gz # Download a file
When a service is not reachable from outside the machine, I always run ss -tulnp first. It tells me immediately: is the service even listening? Is it listening on all interfaces (0.0.0.0 or *) or only on localhost (127.0.0.1)? A service listening only on 127.0.0.1 is only reachable from the machine itself — not from the network. That is the cause of countless "why can't I connect?" support tickets.
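You can reproduce the localhost-only trap in your VM. This sketch uses Python's built-in web server purely as a convenient listener (ports 9001 and 9002 are arbitrary demo values):

```shell
#!/bin/bash
# Start one listener bound to localhost only, one bound to all interfaces,
# then compare what ss reports for each.
python3 -m http.server 9001 --bind 127.0.0.1 >/dev/null 2>&1 &
PID1=$!
python3 -m http.server 9002 --bind 0.0.0.0 >/dev/null 2>&1 &
PID2=$!
sleep 1
ss -tln | grep -E ':900[12]'
# 127.0.0.1:9001 = reachable only from this machine
# 0.0.0.0:9002   = reachable from the network (firewall permitting)
kill "$PID1" "$PID2" 2>/dev/null
```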
How you change IP addresses depends on the distro and whether you want the change to survive a reboot. Temporary changes are good for testing. Permanent changes go in configuration files.
# Add an IP address to an interface
sudo ip addr add 192.168.1.50/24 dev eth0
# Remove an IP address
sudo ip addr del 192.168.1.50/24 dev eth0
# Set the default gateway
sudo ip route add default via 192.168.1.1
# Remove the default gateway
sudo ip route del default
# Bring an interface down and back up
sudo ip link set eth0 down
sudo ip link set eth0 up
Ubuntu uses Netplan for network configuration. The config files live in /etc/netplan/ and are written in YAML format. Indentation matters in YAML — use spaces, never tabs.
# Find your config file (usually named something like 00-installer-config.yaml)
ls /etc/netplan/
# Edit it
sudo nano /etc/netplan/00-installer-config.yaml
# Static IP configuration:
network:
  version: 2
  ethernets:
    eth0:                # Replace with your interface name
      dhcp4: no
      addresses:
        - 192.168.1.50/24
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [192.168.1.10, 8.8.8.8]
# Apply the changes (takes effect immediately, no reboot needed)
sudo netplan apply
# Test it first without permanently applying (reverts after 120 seconds if you don't confirm)
sudo netplan try
# For DHCP instead, it is much simpler:
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: yes
# List all network connections
nmcli connection show
# Set static IP (replace "Wired connection 1" with your connection name from above)
sudo nmcli con mod "Wired connection 1" ipv4.addresses 192.168.1.50/24
sudo nmcli con mod "Wired connection 1" ipv4.gateway 192.168.1.1
sudo nmcli con mod "Wired connection 1" ipv4.dns "192.168.1.10 8.8.8.8"
sudo nmcli con mod "Wired connection 1" ipv4.method manual
# Apply
sudo nmcli con up "Wired connection 1"
# Switch back to DHCP
sudo nmcli con mod "Wired connection 1" ipv4.method auto
sudo nmcli con up "Wired connection 1"
# Restart the whole network service
sudo systemctl restart NetworkManager
If you are connected to a machine via SSH and you change its IP address, your SSH session will drop immediately when the change takes effect. You will be locked out. Always have console access (or a backup plan) before changing network config on a remote machine. I have locked myself out of remote machines doing exactly this. It is not fun.
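One classic safety net is a dead-man switch: arm a delayed reboot before you touch the config, and cancel it once you have confirmed you can still get in. Shown as a commented walkthrough because the commands reboot the machine; note this only rolls you back if your change was runtime-only and the persistent config on disk is still the old one.

```shell
# 1. Arm a delayed reboot (the dead-man switch):
#      sudo shutdown -r +10 "rollback: network change in progress"
# 2. Apply your network change, then try to open a NEW ssh session.
# 3a. New session works? Disarm the reboot:
#      sudo shutdown -c
# 3b. Locked out? Do nothing -- in 10 minutes the box reboots and comes
#     back up on whatever persistent config is still on disk.
```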
A service is a program that runs in the background, usually starting automatically when the machine boots. Your SSH server is a service. Your web server is a service. Your DHCP client is a service. systemd is the system that manages all of them on every modern Linux distro.
systemd is like a hotel manager. It decides which services start when the hotel opens (boot), makes sure they are running properly, restarts them if they crash, and shuts them down properly when the hotel closes (shutdown). You give orders to the hotel manager (systemctl), not directly to each service.
sudo systemctl start nginx # Start the service right now
sudo systemctl stop nginx # Stop it
sudo systemctl restart nginx # Stop then start (applies config changes)
sudo systemctl reload nginx # Reload config without full restart (not all services support this)
sudo systemctl status nginx # Is it running? Show recent log output.
# Green "active (running)" = good. Red "failed" = problem.
sudo systemctl enable nginx # Start automatically on boot
sudo systemctl disable nginx # Do NOT start on boot
sudo systemctl is-active nginx # Quick check: outputs "active" or "inactive"
sudo systemctl is-enabled nginx # Is it set to start on boot?
# List all services
systemctl list-units --type=service # All service units
systemctl list-units --type=service --state=running # Only running ones
systemctl list-units --type=service --state=failed # Failed services — check these
# journald captures all output from all services
sudo journalctl -u nginx # All logs for nginx ever
sudo journalctl -u nginx -f # Follow nginx logs live
sudo journalctl -u nginx -n 50 # Last 50 lines
sudo journalctl -u nginx --since "1 hour ago" # Last hour only
sudo journalctl -u nginx --since "2024-01-15 10:00" --until "2024-01-15 11:00"
sudo journalctl -p err # Only errors across all services
sudo journalctl -p warning # Warnings and above
sudo journalctl --since today # Everything from today
sudo journalctl -b # Everything since last boot
When a service fails to start, systemctl status servicename gives you the last few log lines immediately. That is usually enough to diagnose the problem. If not, follow up with journalctl -u servicename -n 100 to get more context. Nine times out of ten the error is clearly stated: wrong config file syntax, missing dependency, port already in use, wrong file permissions. Read the error message before assuming it is complicated.
Installing software on Linux is not downloading an .exe and clicking Next. You install packages from repositories — curated collections of software maintained by the distro. The package manager handles downloading, installing, dependency resolution, and updates. It is cleaner and safer than the Windows way.
A package repository is like an app store, but run by the OS vendor and with strict vetting. You do not go to random websites and download installers. You say "I want nginx" and the package manager fetches the official version, checks its signature to verify it has not been tampered with, and installs it along with everything it depends on.
# Update the package list (always do this before installing anything)
sudo apt update
# This downloads the latest list of available packages from the repositories.
# It does NOT install anything yet. Just refreshes the catalog.
# Upgrade installed packages to their latest versions
sudo apt upgrade
# Do both at once
sudo apt update && sudo apt upgrade -y
# -y automatically answers "yes" to all prompts
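That `&&` is ordinary shell logic, not apt syntax: the second command runs only if the first exits successfully. A quick demonstration, with `true` and `false` standing in for a successful and a failed `apt update`:

```shell
# '&&' short-circuits: the right side runs only when the left side exits 0
false && echo "never printed: the 'update' failed, so no 'upgrade'"
true && echo "update succeeded, running upgrade"
```

This is why `sudo apt update && sudo apt upgrade -y` can never upgrade against a stale package list: a failed update stops the chain.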
# Install software
sudo apt install nginx
sudo apt install nginx curl htop vim -y # Multiple packages at once
# Remove software
sudo apt remove nginx # Removes the package, keeps config files
sudo apt purge nginx # Removes package AND config files (cleaner)
sudo apt autoremove # Remove packages installed as dependencies
# that are no longer needed by anything
# Search for packages
apt search nginx # Find packages related to nginx
apt show nginx # Info about a specific package
# Check if something is installed
dpkg -l | grep nginx
dpkg -l nginx # Check a specific package
# On RHEL/Rocky, dnf replaces apt. The equivalents:
sudo dnf check-update # Check for available updates
sudo dnf upgrade -y # Upgrade everything
sudo dnf install nginx -y # Install
sudo dnf remove nginx # Remove
sudo dnf search nginx # Search
sudo dnf info nginx # Package details
# Check if installed
rpm -qa | grep nginx
rpm -q nginx # Specific package
# List all installed packages
rpm -qa | sort
rsyslog is the logging daemon running on almost every Linux system. It collects log messages from the kernel and from applications, and writes them to files under /var/log/. More importantly for security work, it can forward those logs in real time over the network to a central SIEM. This is how you get Linux machines visible in your detection stack.
rsyslog is like a postal service for log messages. Every application in Linux can drop a message at the post office (the syslog socket). rsyslog picks it up, decides where it belongs based on rules you configure, and either delivers it to a local file or sends it across the network to another post office (your SIEM). You write the delivery rules — rsyslog handles the routing.
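The "delivery rules" live in /etc/rsyslog.conf. Each rule is a selector (facility.severity) plus an action, usually a file path. A severity selects itself and everything more severe. A simplified sketch; the exact rules on your machine will differ, and /var/log/errors here is a made-up path for illustration:

```
# facility.severity               action
auth,authpriv.*                   /var/log/auth.log   # all auth messages, any severity
*.err                             /var/log/errors     # err and above, from any facility
cron.*                            /var/log/cron.log
*.info;mail.none;authpriv.none    /var/log/messages   # info and above, minus noisy facilities
```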
/var/log/syslog # General system log on Ubuntu/Debian
# Almost everything ends up here
/var/log/messages # Same thing but on RHEL/Rocky
/var/log/auth.log # Authentication events on Ubuntu/Debian
# SSH logins, sudo usage, user switches — all here
/var/log/secure # Authentication events on RHEL/Rocky (same content, different path)
/var/log/kern.log # Kernel messages
/var/log/cron # Cron job execution log on RHEL/Rocky (Ubuntu sends cron output to syslog instead)
/var/log/nginx/ # nginx access.log and error.log
/var/log/apache2/ # Apache logs
/var/log/audit/audit.log # auditd log (if installed — covers syscall-level activity)
The most valuable log for security monitoring on Linux is auth.log (or secure on RHEL). Every SSH login attempt — successful or failed — appears here with the source IP, username, and timestamp. If someone is brute-forcing your SSH server, you will see hundreds of "Failed password" lines here.
# Watch auth.log live
sudo tail -f /var/log/auth.log
# Find all failed SSH attempts and count them by source IP
grep "Failed password" /var/log/auth.log | awk '{print $11}' | sort | uniq -c | sort -rn
# This command is genuinely useful. It finds all the source IPs of failed
# SSH attempts and shows you the top offenders. Run it on any internet-facing server.
# Find successful SSH logins
grep "Accepted password\|Accepted publickey" /var/log/auth.log
# Send a test log message manually
logger -p auth.info "Test message from $(hostname)"
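One caveat about the $11 in the counting one-liner above: it assumes the standard "Failed password for USER from IP" layout. Attempts against nonexistent accounts produce "invalid user" lines with two extra fields, shifting the IP to $13. A more robust variant grabs the field after the word "from" instead, wherever it falls. Shown here against a synthetic log at /tmp/auth_sample.log so it is safe to run anywhere:

```shell
# Synthetic auth.log lines — note the extra "invalid user" fields on line 2
cat > /tmp/auth_sample.log << 'EOF'
Jan 15 10:00:01 web1 sshd[100]: Failed password for root from 203.0.113.5 port 40022 ssh2
Jan 15 10:00:02 web1 sshd[101]: Failed password for invalid user admin from 203.0.113.5 port 40023 ssh2
Jan 15 10:00:03 web1 sshd[102]: Failed password for root from 198.51.100.9 port 40024 ssh2
EOF

# Print the field immediately after "from", whatever its position
awk '/Failed password/ { for (i = 1; i < NF; i++) if ($i == "from") print $(i+1) }' \
    /tmp/auth_sample.log | sort | uniq -c | sort -rn
```

On this sample it counts 203.0.113.5 twice and 198.51.100.9 once. Swap in /var/log/auth.log on a real server.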
The config file for rsyslog is /etc/rsyslog.conf plus any files in /etc/rsyslog.d/. Best practice is to create a new file in /etc/rsyslog.d/ rather than editing the main config directly.
# Create a forwarding config file
sudo nano /etc/rsyslog.d/50-siem-forward.conf
# Forward everything via UDP to your SIEM (port 514)
*.* @192.168.1.100:514
# @ = UDP. Fast but no delivery guarantee. Can lose messages under load.
# Forward via TCP (recommended for security logs)
*.* @@192.168.1.100:514
# @@ = TCP. Reliable delivery. Use this.
# Forward only authentication logs
auth,authpriv.* @@192.168.1.100:514
# Forward with hostname included in the message (important when multiple
# machines send to the same SIEM — you need to know which machine sent it)
$template SIEM_Format,"%HOSTNAME% %syslogtag%%msg%\n"
*.* @@192.168.1.100:514;SIEM_Format
# Verify the config has no syntax errors (do this BEFORE restarting)
sudo rsyslogd -N1
# Apply changes
sudo systemctl restart rsyslog
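One caveat with TCP forwarding: if the SIEM goes down, rsyslog's small in-memory buffer for that action fills, and messages can be delayed or dropped. A disk-assisted queue spools to disk and retries until the SIEM comes back. A sketch using rsyslog's legacy queue directives, placed just before the forwarding rule; the queue file name and size cap here are arbitrary choices:

```
# In /etc/rsyslog.d/50-siem-forward.conf, above the forwarding line
$ActionQueueType LinkedList        # in-memory queue...
$ActionQueueFileName siemspool     # ...that spills to disk files with this prefix
$ActionQueueMaxDiskSpace 1g        # cap the on-disk spool at 1 GB
$ActionQueueSaveOnShutdown on      # flush the queue to disk on restart
$ActionResumeRetryCount -1         # retry forever instead of discarding
*.* @@192.168.1.100:514
```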
# If your SIEM is Wazuh, use the agent instead of raw syslog forwarding.
# The agent is smarter — it parses logs locally before sending,
# monitors file integrity, and runs active responses.
# Ubuntu/Debian
curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | gpg --dearmor | sudo tee /usr/share/keyrings/wazuh.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/4.x/apt/ stable main" | sudo tee /etc/apt/sources.list.d/wazuh.list
sudo apt update
sudo WAZUH_MANAGER='192.168.1.100' apt install wazuh-agent
sudo systemctl enable --now wazuh-agent
# Rocky Linux/RHEL
sudo rpm --import https://packages.wazuh.com/key/GPG-KEY-WAZUH
cat > /tmp/wazuh.repo << 'EOF'
[wazuh]
name=Wazuh
baseurl=https://packages.wazuh.com/4.x/yum/
enabled=1
gpgcheck=1
gpgkey=https://packages.wazuh.com/key/GPG-KEY-WAZUH
EOF
sudo mv /tmp/wazuh.repo /etc/yum.repos.d/wazuh.repo
sudo WAZUH_MANAGER='192.168.1.100' dnf install wazuh-agent -y
sudo systemctl enable --now wazuh-agent
When something goes wrong on a Linux server and you need to investigate, the first 60 seconds should look like this: sudo tail -100 /var/log/auth.log to see recent authentication. sudo journalctl -p err --since "1 hour ago" for recent errors across all services. sudo ss -tulnp to see what is listening (sudo lets ss name the process behind every socket, not just your own). ps aux to see what is running. In under a minute you have enough to know whether this is a crashed service, an unauthorized access, or a configuration problem. Start with logs. Always.