What is syslog and what is it used for?

Introduction

Logging from Wikipedia:

“Logging is the cutting, skidding, on-site processing, and loading of trees or logs onto trucks or skeleton cars.”  /Wikipedia/
No, it’s a different industry. Again:
“In computing, a log file is a file that records either the events which happen while an operating system or other software runs, or the personal messages between different users of a communication software. The act of keeping a log is called logging.” /Wikipedia/

Event logging

Recording events on a given system for different purposes, e.g. monitoring, debugging, auditing, etc.
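As a toy illustration (not syslog itself), event logging can be as simple as appending timestamped records to a file; real applications hand this off to the syslog daemon instead, e.g. via the logger(1) utility:

```shell
# Toy event logger: append a timestamped record to a log file.
# A real application would send this to syslog (e.g. `logger -p user.notice ...`).
LOGFILE=/tmp/events.log

log_event () {
    printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$*" >> "$LOGFILE"
}

log_event "backup finished"
tail -n 1 "$LOGFILE"
```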

Continue reading “What is syslog and what is it used for?”


Say bye-bye to the old trusty MD5

It is official: Microsoft is one of the big ones who’ll be retiring the venerable-but-vulnerable MD5 algorithm. Don’t worry, you’ll still be able to create MD5 hashes for your documents and verify them, but not for authentication and code signing anymore.

The first chink in MD5’s armor was discovered in 1996; while not critical (MD5 creates 128-bit hashes; the vulnerability is in one of the 64 steps used to create the hash value), security experts began recommending alternate algorithms. Both of the recommended replacement hash functions have since become obsolete as well.

How big of a security risk was the 1996 announcement? When something like this comes up, cryptanalysts begin investigating and creating scenarios for how the function can be compromised. It took 8 years and an enormous increase in computing power to crack the MD5 hashing algorithm. The server the Chinese analysts demonstrated on (an IBM pSeries) reportedly had 24 POWER processors and 1 TB of RAM; finding a collision for a randomly given MD5 hash took less than 1 hour.

What is a collision? Basically you take two files that differ in content (and possibly in size), run your favourite md5sum command on them, and surprise-surprise, the files get identical hashes. No big deal, really? Imagine then the horror Adobe’s programmers felt when _all_ their user data (passwords, hints, everything) was leaked. The passwords were of course stored hashed, but savvy users sorting the data by password hash soon found many duplicates, even when the password hints indicated completely different passwords. In MySQL databases you have the option of storing fields as MD5 hashes, and many authentication schemes simply hash whatever you type into the password field and compare it to the value stored in the SQL database.
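A toy sketch of that naive compare-the-hashes authentication scheme (the stored hash here is the well-known MD5 of the string "password"; not Adobe's actual code, obviously):

```shell
# Naive MD5 password check: hash the candidate, compare to the stored hash.
stored='5f4dcc3b5aa765d61d8327deb882cf99'   # md5sum of the string "password"
candidate='password'

hash=$(printf '%s' "$candidate" | md5sum | awk '{print $1}')
if [ "$hash" = "$stored" ]; then
    echo "access granted"
fi
```

Any input that collides with the stored hash passes this check just as well as the real password does.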


Using the cracking method outlined in the 2004 announcement and some (cheap!) hardware, a password can be created from an MD5 hash value. It won’t be the original password, but since the hashed value will be the same, you’re in like Flynn. The hardware required is really not on the level of 2004: today you can use just about anything with a processor in it. A powerful GPU is one way, or you can repurpose your bitcoin-mining FPGAs to run the hash blocks over and over, hundreds of millions of times a second. The only good thing about the published methods is that you won’t be able to recover the original password, just replicate it with something that will be accepted in its place.

Read about the update in more detail on Microsoft’s website: Technet
Read about the MD5 function, its history, and the vulnerability: Wikipedia
Check if your personal data was leaked: ZDNet

Starting and keeping a reboot-persistent autossh running on Unix-like systems without root rights

Basic scenario

There are systems:

  • which can’t be accessed from the public internet, e.g. behind IPv4 NAT where a DMZ isn’t an option;
  • which shouldn’t be accessed directly from the public internet, so a firewall or other access control isn’t suitable.

Solution

I wrote a simple shell script that keeps autossh running continuously on Unix-like systems.
The script is started by cron every minute, so no root rights are required; your local user just needs permission to use cron. I didn’t use @reboot because that crontab directive isn’t implemented on many Unix systems.
Autossh supervises a monitored SSH connection that opens a reverse SSH tunnel. If the connection is lost, autossh restarts it.

The sshd is listening on local port 22022 of the host “sage”. So my systems can be accessed only via a local account on the server “sage”, which is reachable from anywhere on the net.

The script

#!/bin/bash
#set -x

HOST="sage"

AUTOSSH_PATH="/usr/bin/ssh"
export AUTOSSH_PATH

AUTOSSH_PIDFILE="/home/miam/bin/autossh.pid"
export AUTOSSH_PIDFILE

PIDFILE="$AUTOSSH_PIDFILE"
AUTOSSH_CMD="/usr/bin/autossh"

call_autossh ()
{
    # -M 22023: monitoring port; -N: no remote command;
    # -R 22022:localhost:22: reverse tunnel; -f: drop to background
    "$AUTOSSH_CMD" -M 22023 -N -R 22022:localhost:22 -f "$HOST"
}

self_check ()
{
    # Return 0 if the autossh process recorded in the pidfile is still alive.
    if [ -f "$PIDFILE" ]; then
        PID=$(cat "$PIDFILE")
        if kill -0 "$PID" 2>/dev/null; then
            return 0
        else
            return 1
        fi
    else
        return 1
    fi
}

if ! self_check; then
    call_autossh
fi

The crontab entry

* * * * * /home/miam/bin/autossh.sh >> /home/miam/bin/autossh.log 2>&1
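Once the tunnel is up, reaching the hidden machine is a two-hop login (a sketch, using the host name and port configured above):

```shell
# log in to the publicly reachable host first...
ssh sage
# ...then, from sage, hop through the reverse tunnel to the hidden machine:
ssh -p 22022 localhost
```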

The repo

The code can be cloned from GitHub:
https://github.com/miam/keeprun/tree/v0.1

Reconfiguring the ZFS dataset of a Solaris zone

We have a Solaris 10 system with many zones installed on it. The aim was to create a Primary and an Alternative Boot Environment for Live Upgrade, and to patch the ABE with LU. The original zone environment was configured like this:

Zonepath : /zones/zonepath/
Dataset  : zones/zonepath/dataset

In this configuration the ZFS dataset was a descendant of the zonepath.
This ZFS layout and zone config was not Live Upgrade compatible: the dataset cannot be a child of the zonepath, otherwise lucreate will fail.
More info: Doc ID 1396382.1 – zfs dataset of filesystem configured inside a non-global zone is a descendant of zonepath dataset.

What can we do? We need to reconfigure both the zone and the ZFS dataset (filesystem).
In this post I describe the reconfiguration process.
Attention! All data and ZFS properties on the ZFS dataset will be lost if we make any mistake 🙂
If all goes well, no data or settings will be lost.
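Before touching anything, it is worth snapshotting the dataset and dumping it somewhere safe. A hedged sketch (the dataset name matches the layout above; the dump path is my choice):

```shell
# safety net before the reconfiguration:
zfs snapshot zones/zonepath/dataset@pre-reconfig
zfs send zones/zonepath/dataset@pre-reconfig > /var/tmp/dataset.pre-reconfig.zfs
```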

Continue reading “Reconfiguring the ZFS dataset of a Solaris zone”

UFS to ZFS migration with UFS merge

In this post I will describe the process of UFS to ZFS migration with a UFS merge, in two steps. I use Solaris Live Upgrade for this procedure.

Abbreviations:
BE: Boot Environment
PBE: Primary Boot Environment
ABE: Alternative Boot Environment

1. Prepare the system to Live Upgrade

Before running Live Upgrade for the first time, you must install the latest Live Upgrade packages from the installation media and install the patches listed in the My Oracle Support knowledge document 1004881.1 – Live Upgrade Software Patch Requirements (formerly 206844), which you can find by searching the My Oracle Support web site.
The latest packages and patches ensure that you have all the latest bug fixes and new features of the release. Install all the patches relevant to your system before proceeding to create a new boot environment.

Remove the old packages:
# pkgrm SUNWlucfg SUNWluu SUNWlur
and reinstall the new versions using the Oracle Solaris installation DVD:
# cd /cdrom/cdrom0/Solaris_10/Tools/Installers
# ./liveupgrade20


Continue reading “UFS to ZFS migration with UFS merge”

Apache NameVirtualHost proxy and IIS 7 with pfSense

I always hesitate to write an article, because once I have found the solution I can’t believe how hard it was to find. But it’s over, I must write it down.

I decided to implement two IIS servers on my network, but I have only one public IP address. I think everyone has already figured out what the trouble with this setup was; if not, I’ll explain.

The main issue was that with only one public IP address I could not reach both servers, so I had to put in a router (and of course a NAT device; actually they are one and the same device). In this case I can forward port 80, but only to a single internal IP address.
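The way out is a name-based virtual host proxy in front of the IIS boxes. A hedged sketch of the Apache side (host names and internal addresses are made up for the example; requires mod_proxy):

```apacheconf
# Apache answers on the single public IP and proxies by the Host: header.
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName iis1.example.com
    ProxyPass        / http://192.168.1.11/
    ProxyPassReverse / http://192.168.1.11/
</VirtualHost>

<VirtualHost *:80>
    ServerName iis2.example.com
    ProxyPass        / http://192.168.1.12/
    ProxyPassReverse / http://192.168.1.12/
</VirtualHost>
```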

Continue reading “Apache NameVirtualHost proxy and IIS 7 with pfSense”

Simple private cloud storage using WebDAV

WebDAV stands for Web Distributed Authoring and Versioning.
For the end user it means file and directory manipulation in a URL namespace. Operating systems with a GUI can mount a WebDAV resource as a remote device.

Why is this storage type called cloud storage? Install this HTTP server setup on two or more separate machines on different networks and synchronize the data over the network. I’ll use ZFS snapshots and the incremental send/receive capability of ZFS.
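The synchronization step can be sketched like this (dataset and peer names are made up for the example; assumes a previous snapshot @sync0 already exists on both sides):

```shell
# take a new snapshot, then ship only the delta since the previous one:
zfs snapshot tank/webdav@sync1
zfs send -i tank/webdav@sync0 tank/webdav@sync1 | ssh node2 zfs receive tank/webdav
```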

Continue reading “Simple private cloud storage using WebDAV”