Building a YaF+SiLK Testing VM
Introduction
I am currently developing a collection of tools, called berry sense, to provide network security monitoring and traffic analysis capabilities on a Raspberry Pi 3. After speaking with my mentor about this, he suggested I take a look at SiLK, a set of traffic analysis tools. Before any real development could begin, I needed to create a testing environment for these tools. This VM would be used solely for testing YaF+SiLK and generating output files that could later serve as templates for understanding how to process the data.
SiLK
The SiLK Suite is a set of traffic analysis tools developed by the CERT Network Situational Awareness Team (CERT NetSA).
From the main website:
The SiLK tool suite supports the efficient collection, storage, and analysis of network flow data, enabling network security analysts to rapidly query large historical traffic data sets.
The suite primarily consists of command-line tools for processing SiLK flow records generated by the SiLK Packing System. The packing system is comprised of daemon applications that collect flow data and convert it to a more space-efficient binary format, which can then be partitioned, sorted, and counted using the collection of tools provided. Flow data must be IPFIX flows generated by yaf, or NetFlow v5 or v9 PDUs from a suitable router. The packing application we will be using is rwflowpack.
YaF
SiLK collects records from sensors, typically routers. Since I had no suitable routers available, I would have to use yaf to generate IPFIX flows from the raw network traffic.
YaF (Yet Another Flowmeter) “processes packet data from pcap dumpfiles as generated by tcpdump or via live capture from an interface using pcap into bidirectional flows, then exports those flows to IPFIX Collecting Processes or in an IPFIX-based file format”.
YaF will be used to process saved pcap files from scheduled captures, or to perform live captures, and send the resulting flows to rwflowpack so we can further process the data using the other SiLK tools.
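As a quick aside, yaf's dumpfile mode can be exercised on its own; a minimal sketch (the file names here are placeholders) that reads a saved capture and writes IPFIX records to a file looks like this:
yaf --in=sample.pcap --out=sample.yaf
The --live and --ipfix options used later in this post switch yaf over to live capture and export over the network instead.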
Testing Environment Setup
I decided to begin testing YaF and SiLK by installing both to a virtual machine.
Tools
- Virtualbox 5.1.14
- silk-3.14.0
- yaf-2.8.4
Virtual Machine
- OS: Ubuntu Server 16.04.2
- RAM: 1GB
- Network Adapters
- enp0s3 - NAT adapter (10.0.2.0/24)
- enp0s8 - Host-only adapter (192.168.56.0/24)
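Before going further, it is worth confirming which interface maps to which adapter, since the names and addresses can differ from one VM to the next; listing the addresses is enough:
ip addr show enp0s3
ip addr show enp0s8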
YaF+SiLK Installation
After the OS was installed and configured, the next step was installing YaF and SiLK. Both packages are available for download from the CERT NetSA tools site.
I began with the YaF installation.
YaF
Dependencies
Before building YaF, there were a few dependencies that needed to be installed.
sudo apt-get install build-essential pkg-config libfixbuf3 libfixbuf3-dev libpcap0.8-dev
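To confirm that ./configure will be able to find libfixbuf, pkg-config can report the installed version (assuming the Ubuntu packages register the module under the name libfixbuf):
pkg-config --modversion libfixbuf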
Installation
Once these dependencies were installed, I moved into the directory where I had extracted yaf and went through the standard build process.
./configure
make
sudo make install
This produced a successful build and installation.
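As a quick sanity check that the binary landed on the PATH and links correctly:
yaf --version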
Resource: YaF Documentation
SiLK
Dependencies
Most of SiLK’s dependencies were covered by YaF, so all that was left to install was python-dev.
sudo apt-get install python-dev
Installation
Before installing SiLK, it was necessary to create a directory where its data would be stored. Following the documentation from the CERT site, I created /data as the storage directory.
sudo mkdir /data
Next, I changed into the directory where I had extracted SiLK and went through the standard build process again, passing the --with-python flag to ./configure to include Python support.
./configure --with-python
make
sudo make install
Installation completed successfully.
Per the documentation, to avoid exporting LD_LIBRARY_PATH each time SiLK is used, I added the following paths in a new file under /etc/ld.so.conf.d/ like so:
cat <<EOF >>silk.conf
/usr/local/lib
/usr/local/lib/silk
EOF
sudo mv silk.conf /etc/ld.so.conf.d/
Finally, I ran ldconfig to apply the changes:
sudo ldconfig
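To verify that the new library paths are picked up, the linker cache can be queried and one of the SiLK tools run (any of the tools should respond to --version):
ldconfig -p | grep libsilk
rwfilter --version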
Configuring SiLK
First, I copied the default site configuration from the directory where SiLK was extracted, at site/twoway/silk.conf, to the /data directory.
sudo cp site/twoway/silk.conf /data
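Since /data is the default data root for a source build, rwsiteinfo should now be able to read the site configuration and list the sensors it defines; a minimal check, assuming the --site-config-file and --fields switches in this SiLK version:
rwsiteinfo --site-config-file=/data/silk.conf --fields=sensor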
Next, I created the sensors.conf file with the following settings:
probe S0 ipfix
listen-on-port 18001 # you may need to allow this port through the firewall so that yaf can talk to it
protocol tcp
listen-as-host 127.0.0.1
end probe
group my-network
ipblocks 192.168.56.0/24 # address of the host-only adapter (change this to the appropriate address)
ipblocks 10.0.2.0/24 # address of the NAT adapter
end group
sensor S0
ipfix-probes S0
internal-ipblocks @my-network
external-ipblocks remainder
end sensor
And saved this file to /data.
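Before starting the packer, the sensor file can be syntax-checked; a minimal sketch, assuming the --verify-sensor-config switch available in recent rwflowpack releases:
rwflowpack --sensor-configuration=/data/sensors.conf --site-config-file=/data/silk.conf --verify-sensor-config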
Configuring rwflowpack
I first made a copy of the default configuration stored at /usr/local/share/silk/etc/rwflowpack.conf
and changed these settings to the following values:
ENABLED=yes
CREATE_DIRECTORIES=yes
SENSOR_CONFIG=/data/sensors.conf
SITE_CONFIG=/data/silk.conf
LOG_TYPE=legacy
LOG_DIR=/var/log/
And saved the file to /usr/local/etc/.
Finally, I copied the startup script that was installed by SiLK to /etc/init.d, set it to start on boot, and started the service.
sudo cp /usr/local/share/silk/etc/init.d/rwflowpack /etc/init.d
sudo update-rc.d rwflowpack start 20 3 4 5 .
sudo service rwflowpack start
With these settings in place, rwflowpack listens on port 18001.
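To confirm the daemon actually came up and is waiting for yaf connections, the service status and the listening socket can be checked:
sudo service rwflowpack status
ss -lnt | grep 18001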
Conclusion
I now have a working standalone flow collection and analysis test environment. I can now use rwfilter and the other tools provided by SiLK to generate files that can later be processed and formatted further with code. A future post will show the results of running some tests on the local machine and generating records.
YaF+SiLK Usage Example
In order to begin capturing and generating records, YaF must be started. Here is an example:
sudo nohup yaf --silk --ipfix=tcp --live=pcap --out=127.0.0.1 --ipfix-port=18001 --in=eth0 --applabel --max-payload=384 &
NOTE: Change eth0 to the appropriate interface you want to capture on.
Now we can generate some traffic using ping or some other method and wait for the records to be flushed. It can take 10-15 minutes for the first records to be flushed. Coffee time!
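Once records have been flushed, rwflowpack writes them under /data, and they can be pulled back out with rwfilter and rwcut. A minimal query (adjust --start-date to the day of the capture):
rwfilter --type=all --start-date=2017/03/01 --proto=0-255 --pass=stdout | rwcut --fields=sIP,dIP,sPort,dPort,protocol,stime | head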