Command Line Mastery
Basics, Customization, Hotkeys & Vim
Built-In Guides - Man, Help, Apropos, Info
Basic command-line commands are used to navigate the file system, manage files and directories, and view and manipulate file contents. Commands like ls, cd, pwd, and echo list files, change directories, display the current working directory, and print variables. Commands like mkdir, rm, cp, and mv create directories, remove files and directories, copy files, and move or rename them. File contents can be viewed and searched with cat, less, head, tail, and grep.
man command # Show manual for a command, e.g. man ls
apropos keyword # Search for commands by keyword, e.g. apropos nano
apropos "network configuration" # Search for commands by phrase
info command # Show information about a command
help command # Show help for a built-in command
ls # List files and directories
ls -l # List files with details
ls -a # List all files, including hidden files
ls -lh # List files with human-readable sizes
pwd # Show the current directory
cd directory # Change to a specific directory
cd .. # Move up one directory
cd ~ # Change to the home directory
cd - # Change to the previous directory
File Operations
File operations are common tasks performed on files and directories in a Linux environment. Creating files, editing them, viewing and changing permissions and ownership, and finding files by name, type, or size are essential skills. Commands like touch, nano, vi, ls, chmod, chown, and find are used to create, edit, inspect, and manipulate files and directories on a Linux system.
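A minimal walkthrough of these operations, using throwaway example files in a temporary directory (the filenames are purely illustrative):

```shell
cd "$(mktemp -d)"               # Work in a scratch directory
touch notes.txt script.sh       # Create two empty files
mkdir -p project/src            # Create nested directories in one step
chmod 644 notes.txt             # Owner read/write; group and others read-only
chmod u+x script.sh             # Add execute permission for the owner
chown "$(id -un)" notes.txt     # Change ownership (to yourself here; other users need root)
ls -l                           # Inspect permissions, owners, and sizes
find . -name "*.txt"            # Locate the .txt file by name
```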
Text Operations
Text operations involve working with text files: searching for patterns, counting lines, words, and characters, and manipulating text data. Commands like echo, cat, less, grep, wc, sort, and uniq display text, concatenate files, view files, search for patterns, count text, sort lines, and remove duplicate lines. These operations are essential for analyzing and processing text data on a Linux system.
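A quick sketch of these text operations on a small sample file (the data is made up for illustration):

```shell
cd "$(mktemp -d)"
printf 'banana\napple\napple\ncherry\n' > fruit.txt   # Sample data
cat fruit.txt                     # Display the whole file
grep -c apple fruit.txt           # Count matching lines -> 2
wc -l fruit.txt                   # Count lines -> 4 fruit.txt
sort fruit.txt | uniq             # Sort, then drop adjacent duplicates
```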
Terminal Customization & Aliases
Terminal customization lets you personalize the appearance and behavior of the command-line interface. Customizing the prompt with PS1, creating aliases for frequently used commands, viewing and removing aliases, and saving them permanently in configuration files like .bashrc are common ways to enhance the command-line experience. Customization and aliases improve productivity and efficiency when working in a terminal.
echo $PATH # Show the system path
echo $HOME # Show the home directory
echo $USER # Show the current user
printenv # Display all environment variables
printenv variable # Display a specific environment variable, e.g. printenv HOME
getent passwd username # Display user information
getent group groupname # Display group information
alias ll="ls -l" # Create an alias for ls -l
alias ..="cd .." # Create an alias for cd ..
alias c="clear" # Create an alias for clear
export c="clear" # Export a variable (run it with $c; this is not a true alias)
export EDITOR=nano # Set the default text editor
export PATH=$PATH:/path/to/directory # Add a directory to the system path
history # Show command history
history n # Show last n commands
history -c # Clear command history
history -d n # Delete command at position n
history -a # Append history to history file
history -r # Read history from history file
vim ~/.bashrc # To edit .bashrc file
# Custom prompt
PS1="custom_prompt" # Set a custom prompt, e.g. PS1="DOCKER_SERVER $ " displays DOCKER_SERVER $ as the prompt
PS1="\[\e[1;32m\]\u@\h \w\[\e[m\] $ " # will display username@hostname current_directory $
# Aliases
alias ll="ls -l" # List files with details via ll
alias ..="cd .." # Move up one directory via ..
alias c="clear" # Clear the terminal via c
# History settings - HIST Variables
HISTFILE=~/.bash_history # Set history file location
HISTSIZE=1000 # Set history size limit
HISTFILESIZE=1000 # Set history file size limit
HISTCONTROL=ignoredups # Ignore duplicate commands
HISTIGNORE="ls:cd:exit" # Ignore specific commands
HISTTIMEFORMAT="%F %T " # Show timestamp in history
echo $HISTCMD # Show the history number of the current command
# Custom functions
function greet() { echo "Hello, $1!"; } # Define a custom function, use greet name to execute
# Save and exit .bashrc
source ~/.bashrc # Reload .bashrc and apply changes to the current session
Vim Cheat Sheet
Vim is a powerful text editor that provides different modes for editing text, navigating files, and performing search and replace operations. Insert mode is for entering text, normal mode for navigation and editing, and command-line mode for executing commands. Navigation commands like h, j, k, l, w, b, e, 0, $, gg, and G move the cursor within a file. Editing commands like x, dd, dw, yy, u, Ctrl + r, and p delete, copy, undo, redo, and paste text. Search and replace commands like /pattern, n, N, and :%s/old/new/g find patterns and replace text in files.
# Vim Navigation
h # Move left
j # Move down
k # Move up
l # Move right
w # Move to the beginning of the next word
b # Move to the beginning of the previous word
e # Move to the end of the word
0 # Move to the beginning of the line
$ # Move to the end of the line
gg # Move to the beginning of the file
G # Move to the end of the file
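The basic editing and search commands described in the introduction, in the same style as the navigation list:

```vim
x # Delete the character under the cursor
dd # Delete (cut) the current line
dw # Delete from the cursor to the start of the next word
yy # Yank (copy) the current line
p # Paste after the cursor
u # Undo the last change
Ctrl + r # Redo
/pattern # Search forward for a pattern
n # Jump to the next match
N # Jump to the previous match
:%s/old/new/g # Replace all occurrences of 'old' with 'new' in the file
```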
Vim Advanced Commands
Advanced Vim commands provide additional functionality for configuring the editor, navigating files, and performing search and replace operations. Commands like :set, :r, :g, :v, and :%s configure settings, insert file contents or command output, delete lines matching (or not matching) a pattern, and replace text. Advanced navigation commands like Ctrl + f, Ctrl + b, Ctrl + d, Ctrl + u, Ctrl + e, Ctrl + y, H, M, and L scroll the window and move the cursor within the screen. The . command repeats the last change.
# Vim Advanced
>> # Shift text right one level
<< # Shift text left one level
:set number # Show line numbers
:set nonumber # Hide line numbers
:set list # Show hidden characters
:set nolist # Hide hidden characters
:set linebreak # Wrap long lines
:set wrap # Enable line wrapping
:set nowrap # Disable line wrapping
:set syntax=python # Set syntax highlighting for Python
:set autoindent # Enable auto-indent
:set noautoindent # Disable auto-indent
:set tabstop=4 # Set tab size to 4 spaces
:set shiftwidth=4 # Set indentation width to 4 spaces
:set expandtab # Convert tabs to spaces
:set noexpandtab # Use tabs instead of spaces
:set hlsearch # Highlight search results
:set nohlsearch # Disable search result highlighting
:set incsearch # Incremental search
:set noincsearch # Disable incremental search
:set ignorecase # Ignore case in search
:set noignorecase # Case-sensitive search
:set mouse=a # Enable mouse support
:set nomouse # Disable mouse support
:set ruler # Show cursor position
:set noruler # Hide cursor position
:set background=dark # Set dark background
:set background=light # Set light background
# Vim Advanced Navigation
Ctrl + f # Page down
Ctrl + b # Page up
Ctrl + d # Half page down
Ctrl + u # Half page up
Ctrl + e # Scroll the window down one line
Ctrl + y # Scroll the window up one line
H # Move to top of screen
M # Move to middle of screen
L # Move to bottom of screen
# Vim Advanced Editing
. # Repeat last change
:r filename # Insert file contents
:r !ls # Insert command output
:r !date # Insert date
:r !echo "text" # Insert text
:r !echo "text" | sort # Insert sorted text
:r !echo "text" | sort -r # Insert reverse sorted text
# Vim Advanced Search and Replace
:g/pattern/d # Delete lines with pattern
:v/pattern/d # Delete lines without pattern
:%s/old/new/gc # Replace all occurrences with confirmation
:%s/^/text/ # Add text at the beginning of each line
:%s/$/text/ # Add text at the end of each line
:0put ='text' # Insert a line of text at the top of the file
:$put ='text' # Append a line of text at the end of the file
Terminal Hotkeys
Terminal hotkeys are keyboard shortcuts that give quick access to common commands and operations at the prompt. Ctrl + a, Ctrl + e, Ctrl + u, Ctrl + k, Ctrl + w, Ctrl + y, Ctrl + l, Ctrl + r, Ctrl + c, Ctrl + z, and Ctrl + d move the cursor, delete and paste text, clear the screen, search command history, terminate or suspend processes, and exit the shell. Navigation hotkeys like Alt + f, Alt + b, Alt + d, Alt + u, Alt + l, Alt + c, and Alt + t move the cursor by word, delete words, change word case, and transpose words. History shortcuts like !n, !!, !string, !?string, and !-n repeat commands by number, repeat the last command, repeat the most recent command starting with or containing a string, and repeat the nth most recent command.
Ctrl + a # Move to the beginning of the line
Ctrl + e # Move to the end of the line
Ctrl + u # Delete from the cursor to the beginning of the line
Ctrl + k # Delete from the cursor to the end of the line
Ctrl + w # Delete the word before the cursor
Ctrl + y # Paste the last deleted text
Ctrl + l # Clear the screen
Ctrl + r # Search command history
Ctrl + c # Terminate the current process
Ctrl + z # Suspend the current process
Ctrl + d # Exit the shell
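The Alt-key and history-expansion shortcuts mentioned above, in the same style (these work at an interactive bash prompt, not in scripts):

```
Alt + f # Move forward one word
Alt + b # Move back one word
Alt + d # Delete the word after the cursor
Alt + u # Uppercase from the cursor to the end of the word
Alt + l # Lowercase from the cursor to the end of the word
Alt + c # Capitalize the word at the cursor
Alt + t # Swap the current word with the previous one
!! # Repeat the last command
!n # Repeat command number n from history
!-n # Repeat the nth most recent command
!string # Repeat the most recent command starting with 'string'
!?string # Repeat the most recent command containing 'string'
```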
Advanced Search Operations
Advanced search operations let you find files by criteria such as name, type, size, permissions, owner, group, and modification time. The find command is a powerful tool for locating files and directories on a Linux system; by combining its options and tests, you can search for files that meet very specific requirements.
find /path -name filename # Find files by name
find / -iname "filename" # Find files by name (case-insensitive), starting from root dir
find . -name "*.txt" # Find all .txt files starting from current dir
find / -type f -iname "*.txt" # Find all .txt files
find /path -type f -name ".*" # Find hidden files
find /path -type f # Find regular files
find /path -type d # Find directories
find / -type l # Find symbolic links, e.g. shortcuts
find / -type s # Find sockets, such as network connections
find / -type p # Find named pipes, such as FIFOs
find / -type c # Find character devices, such as terminals
find / -type b # Find block devices, such as hard drives
find / -type f -name "*.log" # Find files with a specific extension
find /path -size +10M # Find files larger than 10MB
find /path -size -1G # Find files smaller than 1GB
find /path -mtime -1 # Find files modified in the last day
find /path -mtime +7 # Find files modified more than 7 days ago
find /path -name "*.txt" -type f -size +1M -perm 644 -user username -group groupname -mtime -30 # Find .txt files larger than 1MB with specific permissions, owner, group, and modified time
find / -iname "file*.txt" -type f -size +10M -perm 644 -user username -group groupname -mtime -365 # Find case-insensitive .txt files larger than 10MB with specific permissions, owner, group, and modified time
find /path -type f -name "*.log" -exec rm {} \; # Find and remove .log files
find . -type f -name "*.txt" -exec cp {} /path/to/destination \; # Find and copy .txt files to a destination
find /path -type f -name "*.log" -exec grep "pattern" {} \; # Find files with a specific pattern
find / -type f -iname "*.sh" | grep -v "backup" # Find .sh files excluding those with "backup" in the name, -v for invert match
find /path -type f -name "*.log" -delete # Delete files with a specific extension
find / -type f -name "*.log" -print0 | xargs -0 rm # Delete files with a specific extension using xargs, xargs is used to build and execute command lines from standard input
The grep command searches for text patterns in files. By specifying a pattern and a file or directory, you can find specific strings within files. Options like -i (case-insensitive), -w (whole words), -n (line numbers), -v (invert match), and -c (count matching lines) refine the search. grep can also target specific file types, search directories recursively, and display context around matching lines.
grep pattern file # Search for text in a file
grep -r pattern /path # Search for text in files in a directory
grep -i pattern file # Search for text in a file (case-insensitive)
grep -w pattern file # Search for whole words in a file
grep -n pattern file # Show line numbers with matching text
grep -v pattern file # Invert match, show lines that don't match
grep -c pattern file # Count lines with matching text
grep -A 3 pattern file # Show 3 lines after the matching line, -A for after
grep -B 2 pattern file # Show 2 lines before the matching line, -B for before
grep -C 1 pattern file # Show 1 line before and after the matching line, -C for context
grep pattern *.txt # Search for text in .txt files
grep pattern *.log # Search for text in .log files
grep pattern *.sh # Search for text in .sh files, will search within the current directory
grep -r pattern /path # Search for text in files in a directory and its subdirectories
grep -ri pattern /path # Search for text in files in a directory and its subdirectories (case-insensitive)
grep -ri "pattern" /path/*.txt # Search for text in .txt files in a directory (case-insensitive)
grep -ri "pattern" /*.txt # Search .txt files located directly in the root directory (case-insensitive)
Combining find and grep lets you search for text patterns in files across directories: find locates files by specific criteria, and grep searches for patterns inside them. Chaining them with xargs or -exec creates powerful text-searching and analysis workflows.
find /path -type f -name "*.txt" | xargs grep "pattern" # Find .txt files and search for a pattern in them
find /path -type f -name "*.log" -exec grep "pattern" {} \; # Find .log files and search for a pattern in them
find /path -type f -name "*.txt" | xargs grep -c "pattern" # Find .txt files and count lines with a pattern, -c for count
find /path -type f -name "*.log" -exec grep -c "pattern" {} \; # Find .log files and count lines with a pattern
find /path -type f -name "*.txt" | xargs grep -A 3 "pattern" # Find .txt files and show context after the matching line
find /path -type f -name "*.log" -exec grep -B 2 "pattern" {} \; # Find .log files and show context before the matching line
find /path -type f -name "*.txt" | xargs grep -n "pattern" # Find .txt files and show line numbers with matching text
find /path -type f -name "*.log" -exec grep -n "pattern" {} \; # Find .log files and show line numbers with matching text
# Find .txt files, search for a pattern, sort lines, and remove duplicates
find /path -type f -name "*.txt" | xargs grep "pattern" | sort | uniq
# Find .log files, search for a pattern, extract filenames, sort lines, and remove duplicates
find /path -type f -name "*.log" -exec grep "pattern" {} \; | cut -d : -f 1 | sort | uniq
# Find .txt files, count lines with a pattern, and calculate the total count
find /path -type f -name "*.txt" | xargs grep -c "pattern" | awk '{sum += $1} END {print sum}'
# Find .log files, search for a pattern, show line numbers, and exclude specific lines
find . -type f -name "*.log" -exec grep -n "pattern" {} \; | grep -v "exclude"
# Find .txt files, search for a pattern, exclude specific lines, and count lines
find /path -type f -name "*.txt" -exec grep "pattern" {} \; | grep -v "exclude" | wc -l
# Find .log files, search for a pattern, exclude specific lines, extract filenames, sort lines, and remove duplicates
find /path -type f -name "*.log" -exec grep "pattern" {} \; | grep -v "exclude" | cut -d : -f 1 | sort | uniq
# Find .txt files, search for a pattern, exclude specific lines, extract filenames, sort lines, remove duplicates, and copy files to a destination
find /path -type f -name "*.txt" -exec grep "pattern" {} \; | grep -v "exclude" | cut -d : -f 1 | sort | uniq | xargs -I {} cp {} /path/to/destination
# Find .log files, search for a pattern, exclude specific lines, extract filenames, sort lines, remove duplicates, and remove files
find /path -type f -name "*.log" -exec grep "pattern" {} \; | grep -v "exclude" | cut -d : -f 1 | sort | uniq | xargs -I {} rm {}
These examples demonstrate the power of combining find, grep, awk, and sed for file and text manipulation. Together these tools make it possible to search, analyze, and process data efficiently.
find / -type f -name "*.log" # Find all .log files, searching from the root directory
grep -ri "error" /var/log/ # Recursively search for 'error' in /var/log, case insensitive
find /home/user -type f -iname "config*" | xargs grep -i "setting" # Find files starting with 'config' and search for 'setting'
grep -rl "pattern" /path/ # Search recursively in /path for files containing 'pattern', list filenames only
find /etc -type f -exec grep -H 'httpd' {} \; # Find files in /etc and grep 'httpd' in them, show filenames
grep -v "exclude" file # Show lines that do not contain 'exclude'
find / -type f -mmin -60 # Find files modified within the last hour
find /var/log -type f -name "*.log" | xargs grep "error" | awk '{print $4, $5}' # Find log files and print 4th and 5th words of lines containing 'error'
find . -type f -exec grep -qi "pattern" {} \; -print # Quietly check for 'pattern' and print filenames
find / -type f -name "*.php" | xargs grep -i "mysqli_connect" # Find PHP files and search for 'mysqli_connect'
grep "pattern" file | sed 's/pattern/replacement/g' # Search for 'pattern' and replace it in the output
grep -r "pattern" /path | awk -F: '{print $1}' | uniq # Search for 'pattern', get unique filenames
find /var/log -type f | xargs grep -i "error" | sort | uniq -c # Find log files, grep 'error', sort, and count unique lines
grep -ri "pattern" /path | awk '{print $1}' | sort | uniq # Recursively search for 'pattern', print first field, sort, and remove duplicates
find / -type f -perm 0777 | xargs grep "confidential" # Find world-writable files and search for 'confidential'
cat file | grep "pattern" | awk '{print $1, $2}' | sed 's/pattern/replacement/' # Search for a pattern, print specific columns, and substitute text
find /path -type f -name "*.txt" | xargs grep "pattern" | awk '{print $1, $2}' | sed 's/pattern/replacement/' # Find .txt files, search for a pattern, print specific columns, and substitute text
find /path -type f -exec grep -l "todo" {} \; | xargs sed -i 's/todo/TODO/g' # Find files with 'todo', mark them as 'TODO'
Combining awk with find lets you locate files by specific criteria and run text-processing tasks on them: extracting fields, calculating totals and averages, and formatting output.
find /path -type f -name "*.txt" -exec awk '{print $1}' {} \; # Find .txt files and extract the first field
find /path -type f -name "*.log" -exec awk '{print $2}' {} \; # Find .log files and extract the second field
find /path -type f -name "*.txt" -exec awk '{sum += $1} END {print sum}' {} \; # Find .txt files and calculate the sum of the first field
find /path -type f -name "*.log" -exec awk '{sum += $2} END {print sum}' {} \; # Find .log files and calculate the sum of the second field
Combining awk with grep lets you search for text patterns in files and process the matching lines: extracting fields, calculating totals and averages, and formatting output.
grep "pattern" file | awk '{print $1}' # Search for text in a file and extract the first field
grep "pattern" file | awk '{print $2}' # Search for text in a file and extract the second field
grep "pattern" file | awk '{sum += $1} END {print sum}' # Search for text in a file and calculate the sum of the first field
grep "pattern" file | awk '{sum += $2} END {print sum}' # Search for text in a file and calculate the sum of the second field
Bonus
These additional commands assess files by name, type, content, size, permissions, owner, group, and modification time. locate finds files by name, file determines file types, strings extracts printable text, du reports sizes, and stat shows detailed file status. They are useful for detailed searches and analysis of files on a Linux system.
locate filename # Search for files by name
locate -i filename # Search for files by name (case-insensitive)
file filename # Determine file type
file -b filename # Show only the file type
strings filename # Display printable strings in a file
strings -n 10 filename # Display strings longer than 10 characters
du -h filename # Show file size
du -sh directory # Show the total size of a directory
stat filename # Show detailed file status (size, permissions, timestamps)
stat -c "%a %n" filename # Show file permissions in octal format
stat -c "%U %n" filename # Show file owner
stat -c "%G %n" filename # Show file group
stat -c "%y %n" filename # Show file modification time
Text Processing and Transformation
The awk command is a powerful text-processing tool for manipulating and analyzing text data in files. By specifying patterns, field tests, and calculations, you can extract specific columns, print matching lines, compute totals and averages, and format output. awk is commonly combined with commands like grep and sort for complex text-processing tasks.
awk '{print $1}' file # Print the first column
awk '{print $2}' file # Print the second column
awk '{print $1, $3}' file # Print the first and third columns
awk '/pattern/' file # Print lines with a specific pattern
awk '!/pattern/' file # Print lines without a specific pattern
awk '$1 == "value"' file # Print lines where the first field equals a specific value
awk '$2 > 10' file # Print lines where the second field is greater than 10
awk '{sum += $1} END {print sum}' file # Calculate the sum of the first column
awk '{sum += $1} END {print sum/NR}' file # Calculate the average of the first column
awk '{print $1, $2, $3, $4}' file # Print the first four columns
awk '/pattern/ {print $1, $2}' file # Print specific columns in lines with a pattern
awk '$3 ~ /pattern/' file # Print lines where the third column matches a pattern
awk '$2 ~ /^[0-9]+$/' file # Print lines where the second column is a number
awk '{print $NF}' file # Print the last column
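awk's printf gives precise control over output formatting; a small sketch with inline sample data (the names and numbers are made up):

```shell
# Left-justify the name in 10 columns and zero-pad the number to 5 digits
printf 'alice 42\nbob 7\n' | awk '{printf "%-10s %05d\n", $1, $2}'
# alice      00042
# bob        00007
```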
The sed command is a stream editor for transforming text in files. By specifying patterns, replacements, line ranges, and occurrence counts, you can substitute text, delete lines, append text, and perform other edits. sed is commonly combined with grep and awk to process and manipulate text data.
sed 's/old/new/' file # Substitution, replace first occurrence of 'old' with 'new' in each line
sed 's/old/new/g' file # Global substitution, replace all occurrences of 'old' with 'new'
sed 's/old/new/Ig' file # Case-insensitive global substitution (the I flag is a GNU sed extension)
sed -i.bak 's/old/new/g' file # Substitute and back up original file with a .bak extension
sed -i 's/old/new/g' file # Substitute and overwrite original file
sed 's/pattern/replacement/' file # Substitute the first occurrence of a pattern with a replacement
sed 's/pattern/replacement/g' file # Substitute all occurrences of a pattern with a replacement
sed '/pattern/s/old/new/g' file # Substitute 'old' with 'new' in lines with a specific pattern
sed '2s/pattern/replacement/' file # Substitute the first occurrence of a pattern in the second line
sed '2,4s/pattern/replacement/' file # Substitute the first occurrence of a pattern in lines 2 to 4
sed '3,5s/old/new/' file # Substitute 'old' with 'new' for lines 3 to 5
sed '3,10d' file # Delete lines from the 3rd to the 10th
sed 's/pattern/replacement/2' file # Substitute only the second occurrence of a pattern on each line
sed '/pattern/d' file # Delete lines with a specific pattern
sed '/start/,/end/d' file # Delete lines between start and end of patterns
sed '2a\text' file # Append text after the second line
sed -n '5,$p' file # Print lines from the 5th to the end
sed '1,3a\text' file # Append 'text' after each of lines 1 to 3
sed 's/pattern/&\nnew line/' file # Append a new line after a pattern
sed ':a;N;$!ba;s/\n/, /g' file # Replace newlines, turning multiple lines into a single line
sed 's/[^0-9]*//g' file # Remove non-numeric characters
sed 's/[^a-zA-Z]*//g' file # Remove non-alphabetic characters
sed 's/[^0-9]*//g' file | awk '{sum += $1} END {print sum}' # Remove non-numeric characters and calculate the sum
sed 's/[^a-zA-Z]*//g' file | awk '{print tolower($1)}' # Remove non-alphabetic characters and convert to lowercase
sed -n '10,20p' file | sort # Print lines 10 to 20 and sort them
sed '$!N; /^\(.*\)\n\1$/!P; D' file # Remove duplicate consecutive lines
sed 's/$/\r/' file > output # Convert Unix (LF) line endings to Windows (CRLF)
sed -n '/pattern/p' file | grep 'something' # Search for a pattern and filter the output
sed '/baz/s/foo/bar/g' file | awk '{print $1}' # Substitute 'foo' with 'bar' in lines with 'baz' and print the first column
The sort command sorts lines in a file alphabetically, numerically, or by field. Options like -r (reverse order), -n (numeric sort), and -k (sort key) customize the ordering. sort is commonly combined with commands like uniq and awk to process and analyze text data.
sort file # Sort lines in a file
sort -r file # Sort lines in reverse order
sort -n file # Sort lines numerically
sort -k 2 file # Sort lines based on the second column
The uniq command removes adjacent duplicate lines in a file, so the input is usually sorted first. Options like -c (count duplicates), -d (show only duplicates), -f (skip fields), and -s (skip characters) adjust its behavior. uniq is commonly combined with sort and awk to process and analyze text data.
uniq file # Remove duplicate lines in a file
uniq -c file # Count duplicate lines in a file
uniq -d file # Show only duplicate lines
sort file | uniq # Sort and remove duplicate lines
sort file | uniq -c # Sort and count duplicate lines
sort file | uniq -d # Sort and show only duplicate lines
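The -f and -s options mentioned above skip leading fields or characters when comparing lines; a small sketch with inline sample data:

```shell
printf '1 apple\n2 apple\n3 pear\n' | uniq -f 1   # Compare ignoring the first field
# 1 apple
# 3 pear
printf 'Xab\nYab\n' | uniq -s 1                   # Compare ignoring the first character
# Xab
```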
The tr command translates, deletes, or squeezes characters in a file or string. Character sets, ranges, and options like -d (delete characters) and -s (squeeze repeats) customize its behavior. tr is commonly used for character-level transformations of text data.
tr 'a-z' 'A-Z' < file # Translate lowercase to uppercase
tr -d '0-9' < file # Delete digits
tr -s ' ' < file # Squeeze spaces
echo "hello" | tr 'a-z' 'A-Z' # Translate lowercase to uppercase
echo "12345" | tr -d '0-9' # Delete digits
echo "hello world" | tr -s ' ' # Squeeze spaces
The cut command extracts fields and characters from a file or string. Options like -f (fields), -c (characters), and -d (delimiter; fields are tab-separated by default) customize its behavior. cut is commonly used to extract specific data from text files.
cut -f 1 file # Extract the first field
cut -f 2,3 file # Extract the second and third fields
cut -f 1-3 file # Extract the first to third fields
cut -c 1 file # Extract the first character
cut -c 2-4 file # Extract the second to fourth characters
cut -c -4 file # Extract the first to fourth characters
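cut splits on tabs by default; use -d to set another delimiter. A small sketch with made-up sample data:

```shell
cd "$(mktemp -d)"
printf 'id,name,city\n1,alice,oslo\n' > data.csv   # Sample CSV
echo "name:uid:home" | cut -d : -f 2               # -> uid
cut -d , -f 2 data.csv                             # Extract the second comma-separated field
# name
# alice
cut -d : -f 1 /etc/passwd                          # Extract the username column from /etc/passwd
```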
The xargs command builds and executes command lines from standard input. Combined with find, grep, and other commands, it can search for text in files, copy or move files to destinations, remove files, and run commands over lists of files and directories.
echo "file1 file2 file3" | xargs ls # List files
echo "file1 file2 file3" | xargs -n 1 ls # List files one by one
find /path -type f -name "*.txt" | xargs -I {} cp {} /path/to/destination # Copy .txt files to a destination
find /path -type f -name "*.txt" | xargs grep "pattern" # Search for a pattern in .txt files
find /path -type f -name "*.log" | xargs grep -v "pattern" # Search for lines without a pattern in .log files
find /path -type f -name "*.txt" | xargs -I {} mv {} /path/to/destination # Move .txt files to a destination
The paste command merges lines from multiple files side by side, or serially from a single file with -s. Options like -d (delimiter) customize the output. paste is commonly used to combine and format text data from multiple files.
paste file1 file2 # Merge lines from two files
paste -d : file1 file2 # Merge lines with a specific delimiter
paste -s file # Join all lines of a single file into one line (serial mode)
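The header above also mentions merging with line numbers, which the examples don't show. One way, sketched with hypothetical files colors.txt and things.txt, is to number one input with nl before pasting:

```shell
# Hypothetical sample files
printf 'red\ngreen\n' > colors.txt
printf 'apple\nleaf\n' > things.txt

paste colors.txt things.txt              # Corresponding lines, TAB-separated
# red    apple
# green  leaf

nl -w1 colors.txt | paste - things.txt   # Number one input first to get line numbers
```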
Join lines from two files based on fields: Join lines from two files based on field numbers: Join lines from two files with specific formatting: The join command merges lines from two files that share a common field, and both inputs must be sorted on that field. Use -t for the delimiter, -1 and -2 for the field number to join on in each file, and -o for the output fields. The join command is commonly used to combine related records from two files.
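A minimal sketch of join on the default first field, assuming hypothetical files names.txt and roles.txt that are already sorted on the join field:

```shell
# Hypothetical data; join requires both inputs sorted on the join field
printf '1 alice\n2 bob\n' > names.txt
printf '1 admin\n2 user\n' > roles.txt

join names.txt roles.txt              # Join on the first field of each file
# -> 1 alice admin
#    2 bob user

join -o 1.2,2.2 names.txt roles.txt   # -o picks which fields appear in the output
# -> alice admin
#    bob user
```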
Putting Them Together
Combine commands with pipelines: Use pipelines with redirection: Use pipelines with conditional operators: Pipelines allow you to combine multiple commands and utilities to perform complex operations on text data. By using the pipe symbol (|) to connect commands, you can pass the output of one command as input to another command. Pipelines are commonly used to process, filter, and analyze text data in files and directories. By combining commands like cat, grep, awk, sort, uniq, and xargs with pipelines, you can create powerful text processing workflows.
cat file | grep pattern | wc -l # Count lines with a specific pattern in a file
ls -l | grep "file" | awk '{print $9}' # List files with a specific pattern and print filenames
find /path -type f | xargs grep "pattern" # Find files and search for a pattern in them
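The header above mentions pipelines with redirection and with conditional operators, which the examples don't show. A short sketch, assuming a hypothetical sample.txt:

```shell
# Hypothetical sample data
printf 'alpha\nbeta\nalpha\n' > sample.txt

# Redirect a pipeline's result into a file instead of the terminal
grep "alpha" sample.txt | wc -l > count.txt

# Conditional operators: && runs the next command only on success, || only on failure
grep -q "alpha" sample.txt && echo "found" || echo "missing"
# -> found
```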
Use awk and sed for text processing: Use awk and sed with pipelines: Use awk and sed with find and grep: The awk and sed commands are powerful text processing tools that allow you to manipulate and analyze text data in files. By specifying patterns, field values, calculations, and substitutions, you can extract specific columns, print lines with patterns, calculate totals, format output, substitute text, delete lines, and append text. The awk and sed commands are commonly used in combination with other commands like grep, sort, and uniq to perform complex text processing tasks.
awk '{print $1, $2}' file # Print the first two columns
awk '/pattern/ {print $1, $2}' file # Print specific columns in lines with a pattern
awk '$2 ~ /^[0-9]+$/' file # Print lines where the second column is a number
awk '{sum += $1} END {print sum}' file # Calculate the sum of the first column
awk '{printf "%-10s %-10s\n", $1, $2}' file # Format output with specific spacing
sed 's/pattern/replacement/' file # Substitute the first occurrence of a pattern with a replacement
sed 's/pattern/replacement/g' file # Substitute all occurrences of a pattern with a replacement
sed '/pattern/d' file # Delete lines with a specific pattern
sed '2a\text' file # Append text after the second line
cat file | awk '{print $1, $2}' | sed 's/pattern/replacement/' # Print specific columns and substitute text
grep pattern file | awk '{print $1, $2}' | sed 's/pattern/replacement/' # Search for a pattern, print specific columns, and substitute text
find /path -type f -name "*.txt" | xargs grep "pattern" | awk '{print $1, $2}' | sed 's/pattern/replacement/' # Find .txt files, search for a pattern, print specific columns, and substitute text
find /path -type f -name "*.log" -exec grep "pattern" {} \; | awk '{print $1, $2}' | sed 's/pattern/replacement/' # Find .log files, search for a pattern, print specific columns, and substitute text
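The placeholder patterns above become concrete with a small sample. A sketch assuming a hypothetical fruit.txt with a name column and an amount column:

```shell
# Hypothetical two-column sample data
printf 'apples 3\nbananas 5\napples 2\n' > fruit.txt

awk '{sum += $2} END {print sum}' fruit.txt   # Sum the second column
# -> 10

# Substitute text, keep only the first column, and deduplicate
sed 's/apples/pears/g' fruit.txt | awk '{print $1}' | sort -u
# -> bananas
#    pears
```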
Use sort and uniq for text processing: Use sort and uniq with pipelines: Use sort and uniq with find and grep: The sort and uniq commands are used for sorting lines and removing duplicate lines in text files. By combining sort and uniq with pipelines, you can process and analyze text data by sorting lines, counting duplicate lines, and removing duplicates. The sort and uniq commands are commonly used in combination with other commands like grep, awk, and sed to perform advanced text processing tasks.
sort file | uniq # Sort lines and remove duplicate lines
sort -r file | uniq -c # Sort lines in reverse order and prefix each line with its occurrence count
sort -n file | uniq -d # Sort lines numerically and show only duplicate lines
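uniq only collapses adjacent duplicates, which is why sort comes first in the examples above. Sorting, counting, then re-sorting numerically gives the classic frequency-count idiom, sketched with a hypothetical pets.txt:

```shell
# Hypothetical sample data
printf 'cat\ndog\ncat\ncat\n' > pets.txt

# Frequency table: count occurrences, then order by count, highest first
sort pets.txt | uniq -c | sort -rn
```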
Use tr and cut for text processing: Use tr and cut with pipelines: Use tr and cut with find and grep: The tr and cut commands are used for translating characters and extracting fields or characters from text files. By specifying sets, ranges, delimiters, and options, you can customize the behavior of the tr and cut commands to perform character transformations and data extraction. The tr and cut commands are commonly used in combination with other commands like grep, awk, and sed to process and analyze text data in files.
tr 'a-z' 'A-Z' < file # Translate lowercase to uppercase
tr -d '0-9' < file # Delete digits
tr -s ' ' < file # Squeeze runs of repeated spaces into a single space
cut -f 1 file # Extract the first field
cut -c 1 file # Extract the first character
cut -d : -f 1 file # Extract the first field based on :
cat file | tr 'a-z' 'A-Z' | cut -d : -f 1 # Translate lowercase to uppercase and extract the first field
grep pattern file | tr -d '0-9' | cut -c 1 # Search for a pattern, delete digits, and extract the first character
find /path -type f -name "*.txt" | xargs grep "pattern" | tr 'a-z' 'A-Z' | cut -d : -f 1 # Find .txt files, search for a pattern, translate lowercase to uppercase, and extract the first field
find /path -type f -name "*.log" -exec grep "pattern" {} \; | tr -d '0-9' | cut -c 1 # Find .log files, search for a pattern, delete digits, and extract the first character
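The tr -s and cut -d options combine usefully: cut's delimiter is a single character, so squeezing repeated spaces first makes space-aligned columns cuttable. A sketch with a hypothetical cols.txt:

```shell
# Hypothetical data with runs of spaces between columns
printf 'a   b   c\n' > cols.txt

# Squeeze runs of spaces to one so cut's single-character delimiter works
tr -s ' ' < cols.txt | cut -d ' ' -f 2
# -> b
```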
Use paste and join for text processing: Use paste and join with pipelines: Use paste and join with find and grep: The paste and join commands are used to merge lines from multiple files based on common fields or delimiters. By specifying options like -d for delimiters, -s for joining all lines of one file into a single line, and -1 and -2 for field numbers, you can customize the behavior of the paste and join commands. The paste and join commands are commonly used to combine and format text data from multiple files.
paste file1 file2 # Merge lines from two files
paste -d : file1 file2 # Merge lines with a specific delimiter
paste -s file # Join all lines of a single file into one line (serial mode)
join file1 file2 # Join lines based on common fields
join -t : file1 file2 # Join lines with a specific delimiter
join -1 2 -2 1 file1 file2 # Join lines based on the second field of the first file and the first field of the second file
cat file1 file2 | paste -d : - - # Pair consecutive lines of the combined input, separated by :
grep pattern file | paste -s - | join - file2 # Search for a pattern, merge lines from a single file, and join lines with another file
find /path -type f -name "*.txt" | xargs grep "pattern" | paste -s - | join - file2 # Find .txt files, search for a pattern, merge lines from a single file, and join lines with another file
find /path -type f -name "*.log" -exec grep "pattern" {} \; | paste -d : - - | join - file2 # Find .log files, search for a pattern, merge lines with a specific delimiter, and join lines with another file
Combine awk, sed, and sort for text processing: Use awk, sed, and sort with pipelines: Use awk, sed, and sort with find and grep: Combining awk, sed, and sort commands allows you to manipulate and analyze text data in files. By specifying patterns, field values, substitutions, and sorting criteria, you can extract specific columns, print lines with patterns, substitute text, and sort lines. By combining awk, sed, and sort with pipelines, you can create powerful text processing workflows to process and analyze text data in files and directories.
awk '{print $1, $2}' file | sed 's/pattern/replacement/' | sort # Print specific columns, substitute text, and sort lines
awk '/pattern/ {print $1, $2}' file | sed 's/pattern/replacement/g' | sort -r # Print specific columns in lines with a pattern, substitute all occurrences of a pattern, and sort lines in reverse order
cat file | awk '{print $1, $2}' | sed 's/pattern/replacement/' | sort # Print specific columns, substitute text, and sort lines
grep pattern file | awk '{print $1, $2}' | sed 's/pattern/replacement/g' | sort -r # Search for a pattern, print specific columns, substitute all occurrences of a pattern, and sort lines in reverse order
find /path -type f -name "*.txt" | xargs grep "pattern" | awk '{print $1, $2}' | sed 's/pattern/replacement/' | sort # Find .txt files, search for a pattern, print specific columns, substitute text, and sort lines
find /path -type f -name "*.log" -exec grep "pattern" {} \; | awk '{print $1, $2}' | sed 's/pattern/replacement/g' | sort -r # Find .log files, search for a pattern, print specific columns, substitute all occurrences of a pattern, and sort lines in reverse order
Combine tr, cut, and paste for text processing: Use tr, cut, and paste with pipelines: Use tr, cut, and paste with find and grep: Combining tr, cut, and paste commands allows you to translate characters, extract fields or characters, and merge lines from files. By specifying sets, ranges, delimiters, and options, you can customize the behavior of the tr, cut, and paste commands to perform character transformations, data extraction, and line merging operations. By combining tr, cut, and paste with pipelines, you can create powerful text processing workflows to process and analyze text data in files and directories.
tr 'a-z' 'A-Z' < file | cut -d : -f 1 | paste -s - # Translate lowercase to uppercase, extract the first field, and merge lines from a single file
tr -d '0-9' < file | cut -c 1 | paste -d : - - # Delete digits, extract the first character, and merge lines with a specific delimiter
cat file | tr 'a-z' 'A-Z' | cut -d : -f 1 | paste -s - # Translate lowercase to uppercase, extract the first field, and merge lines from a single file
grep pattern file | tr -d '0-9' | cut -c 1 | paste -d : - - # Search for a pattern, delete digits, extract the first character, and merge lines with a specific delimiter
find /path -type f -name "*.txt" | xargs grep "pattern" | tr 'a-z' 'A-Z' | cut -d : -f 1 | paste -s - # Find .txt files, search for a pattern, translate lowercase to uppercase, extract the first field, and merge lines from a single file
find /path -type f -name "*.log" -exec grep "pattern" {} \; | tr -d '0-9' | cut -c 1 | paste -d : - - # Find .log files, search for a pattern, delete digits, extract the first character, and merge lines with a specific delimiter
Common uses of grep, awk, and sed: Common uses of grep, awk, and sed with pipelines: Grep, awk, and sed are powerful text processing tools that are commonly used in combination to search for patterns, extract specific data, and manipulate text in files. By combining grep, awk, and sed with pipelines, you can create advanced text processing workflows to process and analyze text data in files and directories.
grep -r "pattern" /path # Search for a pattern recursively in a directory
grep -r "pattern" /path | awk '{print $1, $2}' # Search for a pattern and print specific columns
grep -r "pattern" /path | sed 's/pattern/replacement/' # Search for a pattern and substitute text
awk '{print $1, $2}' file # Print the first two columns
awk '/pattern/ {print $1, $2}' file # Print specific columns in lines with a pattern
awk '$2 ~ /^[0-9]+$/' file # Print lines where the second column is a number
sed 's/pattern/replacement/' file # Substitute the first occurrence of a pattern with a replacement
sed -i 's/pattern/replacement/g' file # Substitute all occurrences of a pattern with a replacement and overwrite the original file
sed '/pattern/d' file # Delete lines with a specific pattern
cat file | grep "pattern" | awk '{print $1, $2}' | sed 's/pattern/replacement/' # Search for a pattern, print specific columns, and substitute text
find /path -type f -name "*.txt" | xargs grep "pattern" | awk '{print $1, $2}' | sed 's/pattern/replacement/' # Find .txt files, search for a pattern, print specific columns, and substitute text
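The grep | awk | sed pipelines above use placeholder names; a small end-to-end sketch with hypothetical log data shows the shape of the whole workflow:

```shell
# Hypothetical log data to make the placeholder pipelines concrete
printf 'ERROR disk full\nINFO ok\nERROR net down\n' > app.log

# Filter matching lines, keep selected columns, then substitute text
grep "ERROR" app.log | awk '{print $2, $3}' | sed 's/disk/storage/'
# -> storage full
#    net down
```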