Netdata: real-time performance monitoring for Linux

Netdata is a real-time performance monitoring solution.

Unlike other solutions, which can only present statistics of past performance, Netdata is designed for real-time performance troubleshooting.

Netdata is a Linux daemon that collects data in real time (per second) and presents a web site to view and analyze it. The presentation is also real-time and full of interactive charts that precisely render all collected values.

Netdata has been designed to be installed on every system, without disrupting the applications running on it:

  • It uses just some spare CPU cycles (check Performance).
  • It uses only as much memory as you want it to have (check Memory Requirements).
  • Once started and while running, it does no disk I/O apart from its logging (check Log Files). Of course it saves its DB to disk when it exits and loads it back when it starts.
  • You can use it to monitor all your systems and applications. It will run on Linux PCs, servers or embedded devices.

Out of the box, it comes with plugins that collect key system metrics and metrics of popular applications.

Available here: https://github.com/firehol/netdata

Midnight Commander file size format

When dealing with large files in MC, I have difficulty counting the digits to work out the order of magnitude of a file size (hundreds of MB, tens of GB, etc.). Sometimes I use the trick of pressing the Insert key, which highlights the file and shows its size nicely formatted (i.e. 123,456,789), making it a thousand times more readable.

You can modify the configuration:

You can adjust the displayed digits with the column size option; see the “Listing mode” section in the manual. The file to edit is ~/.config/mc/panels.ini.

To list file sizes as K, M or G, use a narrow size column by means of the user_format key:

[New Left Panel]
user_format=half type name mark size:4 space mtime
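Outside MC, a quick way to get the same readability on the command line is GNU coreutils’ numfmt, which scales raw byte counts to K/M/G suffixes (the number below is just an example):

```shell
# Scale a raw byte count to an IEC suffix (powers of 1024)
numfmt --to=iec 123456789
# Thousands separators instead (locale-dependent; the C locale
# may print plain digits with no grouping)
numfmt --grouping 123456789
```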

Cherrytree Note Taking / Organiser

Cherrytree is a hierarchical note taking application, featuring rich text and syntax highlighting, storing data in a single XML or SQLite file.

Features:

  • rich text (foreground color, background color, bold, italic, underline, strikethrough, small, h1, h2, h3, subscript, superscript, monospace)
  • syntax highlighting supporting several programming languages
  • images handling: insertion in the text, edit (resize/rotate), save as png file
  • embedded files handling: insertion in the text, save to disk
  • multi-level lists handling (bulleted, numbered, to-do and switch between them, multiline with shift+enter)
  • simple tables handling (cells with plain text), cut/copy/paste row, import/export as csv file
  • codeboxes handling: boxes of plain text (optionally with syntax highlighting) into rich text, import/export as text file
  • alignment of text, images, tables and codeboxes (left/center/right/fill)
  • hyperlinks associated to text and images (links to webpages, links to nodes/nodes + anchors, links to files, links to folders)
  • spell check (using pygtkspellcheck and pyenchant)
  • intra application copy/paste: supported single images, single codeboxes, single tables and a compound selection of rich text, images, codeboxes and tables
  • cross application copy/paste (tested with libreoffice and gmail): supported single images, single codeboxes, single tables and a compound selection of rich text, images, codeboxes and tables
  • copying a list of files from the file manager and pasting in cherrytree will create a list of links to files, images are recognized and inserted in the text
  • print & save as pdf file of a selection / node / node and subnodes / the whole tree
  • export to html of a selection / node / node and subnodes / the whole tree
  • export to plain text of a selection / node / node and subnodes / the whole tree
  • toc generation for a node / node and subnodes / the whole tree, based on headers h1, h2 and h3
  • find a node, find in selected node, find in selected node and subnodes, find in all nodes
  • replace in nodes names, replace in selected node, replace in selected node and subnodes, replace in all nodes
  • iteration of the latest find, iteration of the latest replace, iteration of the latest applied text formatting
  • import from html file, import from folder of html files
  • import from plain text file, import from folder of plain text files
  • import from basket, cherrytree, epim html, gnote, keepnote, keynote, knowit, mempad, notecase, rednotebook, tomboy, treepad lite, tuxcards, zim
  • export to cherrytree file of a selection / node / node and subnodes / the whole tree
  • password protection (using http://www.7-zip.org/) – NOTE: while a cherrytree password protected document is opened, an unprotected copy is extracted to a temporary folder of the filesystem; this copy is removed when you close cherrytree
  • tree nodes drag and drop

Download and details:
http://www.giuspen.com/cherrytree/

400+ Free Resources for DevOps & Sysadmins

In 2014 Google had indexed 200 terabytes of data (1 TB equals 1024 GB, to give you some perspective). And it’s estimated that Google’s 200 TB is just 0.004% of the entire internet. Basically, the internet is a big place with near-unlimited information.

So in an effort to decrease searching and increase developing, Morpheus Data published this massive list of free resources for DevOps engineers and System Admins, or really anyone wanting to build something useful out of the internet.

All these resources are free, or offer some kind of free/trial tier. You can use any/all of these tools personally, as a company, or even suggest improvements (in the comments). It’s up to you.

If you find this list useful, please share it with your DevOps/SysAdmin friends on your favorite social network, or visit Morpheus Data to learn how you can 4x your application deployment.

http://www.nextbigwhat.com/devops-sysadmin-tools-resources-297/?_utm_source=1-2-2

MySQL—Some Handy Know-How

From an article on Linux Journal (http://www.linuxjournal.com/content/mysql%E2%80%94some-handy-know-how), below are the commands to get you up and running quickly with MySQL. The Linux Journal site provides many more examples, sample data, etc.

Create database phplogcon and assign rsyslog access rights:

mysql -u root -p
create database phplogcon;
GRANT ALL ON phplogcon.* TO rsyslog@localhost IDENTIFIED BY "password";

Check database and connection with rsyslog works:

mysql -u rsyslog -p
connect phplogcon;
show tables;
quit

Create User:

CREATE USER 'keith'@'localhost' IDENTIFIED BY 'mypass';
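A freshly created user has no privileges until some are granted; a minimal follow-up might look like this (the database name mydb is a placeholder):

```sql
-- Hypothetical example: give the new user full rights on one database
GRANT ALL PRIVILEGES ON mydb.* TO 'keith'@'localhost';
```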

Some basic / useful commands are as follows:

- connect to MySQL 
 
   mysql -uUsername -pPassword 
 
- connect to MySQL , directly to a database 
   
   mysql -uUsername -pPassword DbName 
 
- upload a MySQL schema into my Database 
 
   mysql -uUsername -pPassword DbName < schema.sql 
 
- dump a DB (copy DB for backup)

   mysqldump -uUsername -pPassword DbName > contents-of-db.sql
 
While connected to MySQL : 
 
- display all databases 
 
   show databases; 
 
- connect to a Database 
 
   use DbName; 
 
- view tables of a Database (must be connected to the Database) 
 
   show tables;

Create MySQL User for Backups e.g. sqlbu

CREATE USER 'sqlbu'@'localhost' IDENTIFIED BY  '***';
GRANT SELECT, SHOW VIEW, RELOAD, SHOW DATABASES, LOCK TABLES, EVENT, TRIGGER ON *.* TO 'sqlbu'@'localhost';

MySQL Backup Script to backup all Databases including new DBs added in the future:

#!/bin/bash
#=================================================================
# Backup script for MySQL Databases - this script will
# backup all MySQL Databases including any future additional DB's
#=================================================================
TIMESTAMP=$(date +"%F")
BACKUP_DIR="/var/mysqlbu/$TIMESTAMP"
MYSQL_USER="sqlbu"
MYSQL=/usr/bin/mysql
MYSQL_PASSWORD="Some really long complex password"
MYSQLDUMP=/usr/bin/mysqldump
MAILTO="[email protected]"

mkdir -p "$BACKUP_DIR/mysql"

databases=$($MYSQL --user="$MYSQL_USER" -p"$MYSQL_PASSWORD" -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema|performance_schema)")

for db in $databases; do
  $MYSQLDUMP --force --opt --user="$MYSQL_USER" -p"$MYSQL_PASSWORD" --events --databases "$db" | gzip > "$BACKUP_DIR/mysql/$db.gz"
done

ls -lh $BACKUP_DIR/* > /var/mysqlbu/mysqlbu.rpt
mail -s "MySQL BU Notification - $TIMESTAMP" -a "From: [email protected]" $MAILTO < /var/mysqlbu/mysqlbu.rpt
# Clean up backup directories older than 60 days (never /var/mysqlbu itself)
find /var/mysqlbu/ -mindepth 1 -maxdepth 1 -type d -mtime +60 -exec rm -rf {} \;
#find $BACKUP_DIR/ -type d -mtime +60 -exec rm -rf {} \;
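To run the script unattended, it can be scheduled with cron; a sketch of a crontab entry (the script path /usr/local/bin/mysqlbu.sh is hypothetical):

```
# Run the MySQL backup every night at 02:30
30 2 * * * /usr/local/bin/mysqlbu.sh >/dev/null 2>&1
```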

purge-old-kernels (Ubuntu)

If you have long-running Ubuntu systems (server or desktop), and you keep those systems up to date, you will, over time, accumulate a lot of Linux kernels.

Canonical’s Ubuntu Kernel Team regularly (about once a month) provides kernel updates, patching security issues, fixing bugs, and enabling new hardware drivers. The apt utility tries its best to remove unneeded packages from time to time, but kernels are a little tricky due to their version strings.

Over time, you might find your /boot directory filled with vmlinuz kernels, consuming a considerable amount of disk space. Sometimes, sudo apt-get autoremove will clean these up. However, it doesn’t always work very well (especially if you install a version of Ubuntu that’s not yet released).

What’s the safest way to clean these up? (This question has been asked numerous times, on the UbuntuForums.org and AskUbuntu.com.)

The definitive answer is:

sudo purge-old-kernels

You’ll already have the purge-old-kernels command in Ubuntu 16.04 LTS (and later), as part of the byobu package. In earlier releases of Ubuntu you might need to install bikeshed; you can grab it directly from Launchpad or GitHub.

The info above is from Dustin Kirkland’s Blog – http://blog.dustinkirkland.com/2016/06/purge-old-kernels.html
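If you prefer to see what has accumulated before removing anything, the running kernel (which must never be purged) and the installed kernel packages can be inspected by hand; a sketch, assuming a Debian/Ubuntu system:

```shell
# The running kernel must never be removed; identify it first:
uname -r
# On Debian/Ubuntu, list the installed kernel image packages:
#   dpkg --list 'linux-image*'
# and purge an old one explicitly with e.g.:
#   sudo apt-get purge linux-image-<old-version>
```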


TagSpaces File Organiser

TagSpaces is an open source personal data manager. It helps you organize files with tags on every platform: organize your photos, recipes or invoices in the same way everywhere, with cross-platform support for Windows, Linux, OS X, Android, Firefox and Chrome. With the help of tags you can do research better, or manage projects using the GTD methodology. The application persists the tags in the file names; as a consequence, the tagging information is not vendor-locked and can be used even without the TagSpaces application. The absence of a database makes syncing the tag meta information across devices easy with services like Dropbox. TagSpaces also features basic file management operations, so it is a kind of tag-based file manager.
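Because the tags are persisted in the file name itself, a tagged file can be created and found with any standard tool; a sketch of the bracketed naming convention (file and tag names are made up):

```shell
# TagSpaces keeps tags in square brackets inside the file name
mkdir -p tagdemo
touch 'tagdemo/invoice-2016[paid important].pdf'
# Any tool can filter by tag; quote the brackets so the shell
# does not treat them as glob characters:
ls tagdemo/*'[paid'*
```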

Open and Extensible

TagSpaces is open sourced and published under the AGPL license. It is designed to be easily extended with different plugins for visualization of directory structures or for opening of different file types.

No Backend – No Login – No Cloud

TagSpaces is running completely offline on your computer, smartphone or tablet and does not require internet connection or online registration. You can still use platforms like ownCloud, Dropbox or Bittorrent Sync in order to sync your files between devices.

Ease of use

TagSpaces offers you a convenient web interface to your local file system. It is implemented in JavaScript and HTML5, which are the technologies behind most of the modern web applications.


More details and download:
https://www.tagspaces.org/

Mysterious cab files fill-up temp folder

A Windows server’s disk space was filling up fast due to .cab files.

Upon closer inspection I found that every hour an unknown process would attempt to write a .cab file of approximately 60 MB to the Windows temp folder. Checking with Process Explorer, I found that it was makecab.exe writing these files. Makecab was invoked by services.exe, so that was a bit of a dead end. I looked through the list of Windows scheduled tasks but did not find anything that was supposed to run every hour.

The SFC.exe program writes the details of each verification and repair operation to the CBS.log file. When CBS.log grows to around 50 MB it is copied to CBS.persist.log and a new CBS.log is started. A bit of Google-fu determined that the CBS logs are only useful for serious troubleshooting; if the system is running fine, the file can be deleted and SFC.exe will create a new one the next time it runs. I now speculate that the log grew larger than makecab supports and the process failed, leaving a partial .cab file in the temp folder rather than a complete .cab file in the CBS log folder.
I deleted the offending .cab file and most of the other ones too, keeping just a few recent ones in case we need them. No more mysteries!

http://felixyon.blogspot.com.au/2013/03/mysterious-cab-files-fill-up-temp-folder.html


Duplicati Backup Software

Duplicati is a backup client that securely stores encrypted, incremental, compressed backups on cloud storage services and remote file servers. It works with Amazon S3, Windows Live SkyDrive, Google Drive (Google Docs), Rackspace Cloud Files, WebDAV, SSH, FTP and many more. Duplicati is open source and free.

Duplicati has built-in AES-256 encryption and backups can be signed using GNU Privacy Guard. A built-in scheduler makes sure that backups are always up-to-date. Last but not least, Duplicati provides various options and tweaks like filters, deletion rules, transfer and bandwidth options to run backups for specific purposes.

Reference and Download:
http://www.duplicati.com