zstd, short for Zstandard, is a fast lossless compression algorithm, targeting real-time compression scenarios at zlib-level and better compression ratios. It’s backed by a very fast entropy stage, provided by the Huff0 and FSE library.
The project is provided as an open-source dual BSD and GPLv2 licensed C library, and a command line utility producing and decoding .zst files. Should your project require another programming language, a list of known ports and bindings is provided on the Zstandard homepage.
Install it from the repos:
sudo apt install zstd
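Day-to-day usage follows gzip conventions; a quick sketch (the file name is a placeholder, and note that zstd keeps the original file by default):

```shell
# Create a test file, then compress it (produces demo.txt.zst)
echo "some data" > demo.txt
zstd -q demo.txt
# Decompress it again (-f overwrites the existing demo.txt)
zstd -dq -f demo.txt.zst
# Use a higher compression level (1-19; the default is 3)
zstd -19 -q -f demo.txt
```
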
Create a USB bootable device from an ISO image easily and securely.
Don’t want to mess up the system with the dd command? Create a bootable USB from an ISO in one line [see it in action].
Works seamlessly with hybrid and non-hybrid ISOs (SYSLINUX or UEFI compliant) such as any Linux ISO, Windows ISO or rescue live CDs like UltimateBootCD. You don’t have to tweak anything:
bootiso inspects the ISO file and chooses the best method to make your USB bootable.
bootiso [<options>...] <file.iso>
bootiso <action> [<options>...] <file.iso>
bootiso <action> [<options>...]
The default action [install-auto], as per the first synopsis, is to install an ISO file to a USB device in automatic mode. In this mode, bootiso analyzes the ISO file and selects the best course of action to maximize the odds that your USB stick will actually be bootable (see automatic mode behavior).
<actions> are listed in the section below.
For quick feedback, [probe] around to check bootiso’s capabilities against a given ISO file and to list candidate USB drives [watch video]:
bootiso -p myfile.iso
curl -L https://git.io/bootiso -O
chmod +x bootiso
grepcidr can be used to filter a list of IP addresses against one or more Classless Inter-Domain Routing (CIDR) specifications. As with grep, there are options to invert matching and load patterns from a file. grepcidr is capable of efficiently processing large numbers of IPs and networks.
grepcidr has endless uses in network software, including: mail filtering and processing, network security, log analysis, and many custom applications.
For detailed instructions and examples, please see the README file or man page. A couple of examples of usage:
grepcidr 2001:db8::/32 logfile
grepcidr 184.108.40.206/19 access.log
Install grepcidr with your package manager:
sudo apt install grepcidr
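A few more invocations, using option flags from the grepcidr man page (the file contents below are made-up test data):

```shell
# Build a small list of IPs, then filter it against a CIDR range
printf '10.1.2.3\n8.8.8.8\n192.168.0.5\n' > ips.txt
grepcidr 10.0.0.0/8 ips.txt        # prints 10.1.2.3
grepcidr -v 10.0.0.0/8 ips.txt     # invert: prints the lines outside the range
# Match against several networks listed in a file, one CIDR per line
printf '10.0.0.0/8\n192.168.0.0/16\n' > networks.txt
grepcidr -f networks.txt ips.txt   # matches either network
```
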
For years I’ve used ncdu, an NCurses Disk Usage utility for Linux. Recently someone alerted me to some alternatives to ncdu:
du + rust = dust. Like du but more intuitive, dust is meant to give you an instant overview of which directories are using disk space, without requiring sort or head. Dust prints at most one ‘Did not have permissions’ message, lists the 20 biggest subdirectories or files, and smartly recurses down the tree to find the larger ones. There is no need for a ‘-d’ or a ‘-h’ flag. The largest subdirectory has its size shown in red.
The Tin Summer:
sn is a replacement for du. It has nicer output, saner commands and defaults, and it even runs faster on big directories thanks to multithreading.
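Both tools are written in Rust, so assuming you have a Rust toolchain they can be installed from crates.io (the crate names below are what I believe they are currently published under):

```shell
cargo install du-dust    # installs the `dust` binary
cargo install tin-summer # installs the `sn` binary
# Then, for example:
dust /var/log            # tree view of the biggest items under /var/log
```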
Ncdu is a disk usage analyzer with an ncurses interface. It is designed to find space hogs on a remote server where you don’t have an entire graphical setup available, but it is a useful tool even on regular desktop systems. Ncdu aims to be fast, simple and easy to use, and should be able to run in any minimal POSIX-like environment with ncurses installed.
Install from repos: sudo apt install ncdu
man page: https://dev.yorhel.nl/ncdu/man
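For the remote-server case, the man page also documents an export/import mode; a sketch (host name and paths are placeholders):

```shell
# Scan a directory and export the results instead of opening the browser
ncdu -o scan.json /var
# Browse a previously exported scan
ncdu -f scan.json
# Scan a remote host over ssh and browse the result locally
ssh remotehost ncdu -o- / | ncdu -f-
```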
tcpdump101.com is a great site for generating tcpdump commands: you enter the parameters it asks for and it generates the command for you. It’s handy if you don’t run tcpdump very often and would otherwise have to look up the help/man pages or Google the command switches you want. It also has output for Cisco and Checkpoint firewalls.
From their site they say… tcpdump101.com has been designed to help people capture packets on different devices to assist with network troubleshooting, service troubleshooting and even passive red team activities. There is an assumption that the user has a basic understanding of what they want to capture – as much as this is a tool to help people, the user has to use their own logic, since every situation is different. That being said, I strongly suggest that if you’re just starting out with packet captures you grab a copy of VirtualBox and play around with Linux and tcpdump. Although tcpdump may not be what you ultimately use, it will give you an excellent understanding of what you’ll see, even with other products and vendors.
As a safety measure, if at all possible, make sure to set a capture limit on your PCaps. If you make a mistake building your filters, you may end up capturing a lot of traffic. Although the odds are slim, there is a chance that your PCap could fill the NIC buffer and start dropping packets. The worst-case scenario is that it runs out of memory while you’re logged in remotely. With today’s hardware it most likely won’t happen, but you should always hope for the best and plan for the worst.
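With tcpdump itself, those limits map to standard flags; a hedged example (the interface, host filter and file name are placeholders, so adapt before running):

```shell
# Stop after 1000 packets (-c), truncate each packet to 96 bytes (-s),
# and rotate the output across five 10 MB files (-C size, -W file count)
sudo tcpdump -i eth0 -c 1000 -s 96 -C 10 -W 5 -w capture.pcap host 192.0.2.10
```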
The Log File Navigator
Watch and analyze your log files from a terminal with lnav (http://lnav.org/) for Linux and Mac. Just like CCZE (https://ausinfotech.net/blog/colorize-log-files-with-ccze-tool/), lnav can produce easily readable logs in colour and also highlight important parts of the logs.
Single Log View
All log file contents are merged into a single view based on message timestamps. You no longer need to manually correlate timestamps across multiple windows or figure out the order in which to view rotated log files. The color bars on the left-hand side help to show which file a message belongs to.
Automatic Log Format Detection
The following formats are built in by default:
- Common Web Access Log format
- CUPS page_log
- VMware ESXi/vCenter Logs
- “Generic” – Any message that starts with a timestamp
See http://lnav.org/downloads for details and/or in Linux Debian/Ubuntu run:
sudo apt install lnav
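Basic usage is simply pointing lnav at files or directories (the paths are examples):

```shell
# Open a single file (like tail -f, but colourized and format-aware)
lnav /var/log/syslog
# Open every log under a directory; lnav merges them all by timestamp
lnav /var/log
```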
The network configuration abstraction renderer
Netplan is a utility for easily configuring networking on a Linux system. You simply create a YAML description of the required network interfaces and what each should be configured to do. From this description Netplan will generate all the necessary configuration for your chosen renderer tool.
The way you configure a network interface in Ubuntu 18.04 LTS is completely different from the previous Ubuntu 16.04 LTS: 18.04 uses a new methodology with a new tool called Netplan. In fact, 17.10 already had netplan, though I didn’t notice until setting up an 18.04 server for the first time in a DMZ with no DHCP. This new tool replaces the static interfaces file (/etc/network/interfaces); now you must use /etc/netplan/*.yaml to configure Ubuntu interfaces – yes, YAML files!
How does it work?
Netplan reads network configuration from /etc/netplan/*.yaml which are written by administrators, installers, cloud image instantiations, or other OS deployments. During early boot, Netplan generates backend specific configuration files in /run to hand off control of devices to a particular networking daemon.
How to configure it?
To configure netplan, save configuration files under /etc/netplan/ with a .yaml extension (e.g. /etc/netplan/config.yaml), then run sudo netplan apply. This command parses and applies the configuration to the system. Configuration written to disk under /etc/netplan/ will persist between reboots.
DHCP and static addressing
To let the interface named ‘enp3s0’ get an address via DHCP, create a YAML file with the following:
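A minimal DHCP configuration of that shape (matching the stock example from the netplan documentation; the renderer line is optional) looks like:

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      dhcp4: true
```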
Now run this command to apply it:
sudo netplan apply
Set a static IP address:

network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      addresses: [10.10.10.2/24]
      gateway4: 10.10.10.1
      nameservers:
        search: [mydomain, otherdomain]
        addresses: [10.10.10.1, 220.127.116.11]
Now run this command to apply it:
sudo netplan apply
inxi is a full featured CLI system information tool. It is available in most Linux distribution repositories, and also runs somewhat on BSDs.
Get the latest version from GitHub (see below), or install from your distro’s packages, e.g.
sudo apt install inxi
then simply run inxi
CPU~Single core Intel Xeon E5-2670 v2 (-MCP-) speed~2494 MHz (max) Kernel~4.4.0-116-generic x86_64 Up~22 days Mem~336.5/990.4MB HDD~12.9GB(36.9% used) Procs~146 Client~Shell inxi~2.2.35
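The one-line summary above is the default output; more detail comes from stacking standard inxi flags, for example:

```shell
inxi -F    # full system report
inxi -Fxz  # full report with extra detail (-x) and privacy filtering (-z)
inxi -b    # basic machine summary
inxi -n    # network card and interface info
```
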
I use apt all the time now, even on Ubuntu 14.04 servers (except for apt autoremove), and from 16.04 up I never touch apt-get. What’s the main difference? Just Google it and you’ll find the specific details; for a quick rundown, read below.
From the man page:
DIFFERENCES TO APT-GET(8)
The apt command is meant to be pleasant for end users and does not need
to be backward compatible like apt-get(8). Therefore some options are
different:
· The option DPkg::Progress-Fancy is enabled.
· The option APT::Color is enabled.
· A new list command is available similar to dpkg --list.
· The option upgrade has --with-new-pkgs enabled by default.
Here is a table outline:

| Task | apt-get / apt-cache | apt |
|---|---|---|
| Install a package | apt-get install <package> | apt install <package> |
| Remove a package | apt-get remove <package> | apt remove <package> |
| Remove a package including configuration | apt-get purge <package> | apt purge <package> |
| Update packages (without removing or reinstalling) | apt-get upgrade | apt upgrade |
| Update packages (with removing and reinstalling) | apt-get dist-upgrade | apt full-upgrade |
| Remove unnecessary dependencies | apt-get autoremove | apt autoremove |
| Search for a package | apt-cache search <package> | apt search <package> |
| Display package information | apt-cache show <package> | apt show <package> |
| Display active package sources in detail | apt-cache policy | apt policy |
| Display available and installed package versions | apt-cache policy <package> | apt policy <package> |
| Edit package sources | edit /etc/apt/sources.list | apt edit-sources |
| List packages by criteria | dpkg --get-selections > list.txt | apt list |

Set/change package status (e.g. put a package on hold): echo <package> hold | dpkg --set-selections