Hello readers… This is going to be a long post where I (@attrc) attempt to introduce ongoing (still very beta) work I have been doing that allows for kernel-version-generic processing of Linux memory images using the Volatility memory analysis framework. To keep things somewhat sane, this post is broken into a few sections.
Let us begin…
1) Overview of functionality
First, we will discuss the currently implemented plug-ins and features, starting with the plug-ins that deal with per-process data:
- linux_task_list_ps - Gathers active tasks from the task_struct->tasks list
- Like linux_task_list_ps, except it gathers process command line information from userland; it will eventually include the start time of each process (needs more research)
- Gathers active processes through the kmem_cache (see this paper for more information)
- Lists open files per process
- Lists socket information per process
- Lists process map information (like /proc/<pid>/maps) per process
Each of the plug-ins will, by default, analyze every active process. If you want to limit the analysis to one process or some subset of processes, you can indicate them as a comma-separated list of process IDs with the -p option, such as -p 1,104,15 or -p 12345.
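As a sketch of how such an option might be consumed (this is illustrative only, not Volatility's actual option handling), splitting the comma-separated value is straightforward:

```python
def parse_pid_option(value):
    """Turn a '-p' style comma-separated string into a list of PIDs."""
    return [int(pid) for pid in value.split(",") if pid.strip()]

# A plugin would then skip any task whose PID is not in the requested set.
print(parse_pid_option("1,104,15"))  # [1, 104, 15]
print(parse_pid_option("12345"))     # [12345]
```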
Next the plug-ins related to networking information:
- linux_arp - Prints the ARP table
- Prints network interface information
- linux_route - Prints the routing tables
- linux_route_cache - Prints entries from the routing cache (see this paper for more information)
And finally some miscellaneous plug-ins:
- Prints the buffer shown by the dmesg command
- Prints currently loaded kernel modules
- Prints mounted devices as seen in /proc/mounts
2) Example command line usage
- Gathering active tasks
- python volatility.py -f [path to memory image] --profile=[name of profile] linux_task_list_ps
For more information please see the Volatility README file in the branch root.
3) (Current) Caveats of version support
While the methodology for supporting the huge number of kernel version variations present across Linux distributions is sound and automated, we have currently only generated profiles for our set of test kernels. Profiles are what allow Volatility to interact with a large number of kernels: each contains information about the investigated kernel, including its System.map data and the in-memory layout of all structures. This allows for kernel-version-generic support and makes it trivial to support future kernels.
Currently we have generated profiles for the Ubuntu 184.108.40.206 and 220.127.116.11, Debian 2.6.26, and CentOS 2.6.9-89-EL kernels. These kernels and the associated memory captures make up the current test bed, and their version numbers are far enough apart to contain significant changes between kernel versions. Later in the post I will describe how to generate a profile for your own kernel if it differs from one of the above. Please do not try to use the current profiles on kernels other than the ones listed, as they will not work, even if they are closely related; dealing with this issue will be substantially easier once a stable release with Linux support is done, as discussed later. The list of current profile names can be gathered by running python volatility.py --info .
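To give a feel for what a profile carries, below is a minimal sketch in the vtype format Volatility uses to describe structure layouts. The task_struct size and member offsets shown are made-up illustrations; real values come from the dwarfdump output for the specific kernel being profiled.

```python
# Hypothetical vtypes dictionary in Volatility's format:
#   { struct_name: [ struct_size, { member: [ offset, type_descriptor ] } ] }
linux_types = {
    'task_struct': [1696, {
        'pid':   [268, ['int']],
        'comm':  [461, ['array', 16, ['char']]],
        'tasks': [236, ['list_head']],
    }],
}

# A plugin can then resolve a member's offset for the profiled kernel:
size, members = linux_types['task_struct']
pid_offset, pid_type = members['pid']
print(pid_offset)  # 268
```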
Another caveat is that currently there is only kmem_cache support for SLAB based systems, so if you are using a distribution that utilizes SLUB you will receive an error along the lines of “Could not find a suitable allocator” when attempting to use kmem_cache based plug-ins. Again, SLUB support will be present in the first stable release.
The last caveat is that the Linux support currently works only against 32-bit memory images, as this is all Volatility supports. 64-bit support is planned, and once the appropriate core Volatility functionality is developed, the Linux plug-ins will be thoroughly tested against 64-bit systems.
4) Development and testing still needed
Since the code is still in an early beta stage, there is a need for further development and extensive testing. If you have coding and memory analysis skills, feel free to add your own plug-ins and research to the project. If you are looking for interesting plug-ins that still need to be developed, check the TODO file of the linux-support SVN branch. To get in touch with the developers, use the contact information found in the README file included in the branch root (IRC is usually best/quickest).
Testing is especially needed for popular distributions that are not included in our test bed (Red Hat, SUSE, etc.) and for the newest round of plug-ins (linux_arp, linux_route, linux_route_cache). If you are going to test one of the mentioned distros or your own kernel, you can generate a profile in two easy steps. In the following example I will assume you are generating a profile for a 2.6.16 SUSE kernel.
First you need to install dwarfdump, which is available in source form or through most distributions' package repositories. Once dwarfdump is installed, you then need the System.map for the running kernel and a debug version of the kernel (vmlinux). Most distributions package debug versions of their kernels in their repositories, and if you compile your own kernel, the compilation process will produce a file named "vmlinux" in the directory where you run 'make' (usually /usr/src/linux).
You then run the command as:
dwarfdump -di vmlinux-file > dd-out
Next, you use the Python script included with Volatility, tools/dwarfparse.py, like so:
python tools/dwarfparse.py -s [System.map file path] dd-out > suse_2_6_16_vtypes.py
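Part of what a script like dwarfparse.py must do is fold in the System.map symbols. Each System.map line is simply "address type symbol" (for example, "c0100000 T _text"), so reading it into a symbol table takes only a few lines of Python. This is a sketch of that idea, not the actual dwarfparse.py logic:

```python
def parse_system_map(lines):
    """Build a {symbol_name: address} table from System.map-style lines."""
    symbols = {}
    for line in lines:
        parts = line.split()
        if len(parts) == 3:
            address, _symtype, name = parts
            symbols[name] = int(address, 16)
    return symbols

sample = ["c0100000 T _text", "c036f9a0 D init_task"]
print(hex(parse_system_map(sample)["init_task"]))  # 0xc036f9a0
```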
Once the vtypes file is created, it needs to be copied into the volatility/plugins/overlays/linux/ directory. To finish, simply copy and modify one of the existing profile scripts, such as centos.py, and name the profile appropriately to use your new vtypes.
If you need help creating a profile, please visit the Volatility IRC channel (#volatility on freenode) or comment on this post. In the stable release of Linux support, this process will be fully automated and will not require a debugging version of the kernel as long as the source code is present, which is the more common scenario. The stable release will also contain a large number of profiles and will hopefully minimize the need for users to create their own.
Obtaining a memory capture can be performed a few ways. The easiest is to use the suspend feature of VMware Workstation (and possibly the free VMware Server), which creates a *.vmem file in the virtual machine's data folder. This *.vmem file is a bit-for-bit copy of RAM, and Volatility can be run directly against it. If you do not have access to VMware, you can load either the crash or fmem driver and then use dd to capture memory. If you choose to use fmem, please read its README file before attempting to use dd with it.
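The dd step might look like the sketch below. A plain file stands in for the memory device so the commands can run anywhere; on a live system you would read the device fmem exposes instead, and, as its README explains, you must pass an explicit count so dd stops at the end of physical RAM.

```shell
# Stand-in for the physical-memory device (4 MB of zeroes) so this runs anywhere:
dd if=/dev/zero of=fake-fmem bs=1M count=4 2>/dev/null
# The real capture would read the fmem device; note the explicit count,
# which keeps dd from reading past the end of physical RAM:
dd if=fake-fmem of=linux-mem.dd bs=1M count=4 2>/dev/null
ls -l linux-mem.dd
```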
If you discover a bug when testing the current code, please file it using the online tracker, which can be found here with instructions here .
5) Access to Source Code
The current code can be browsed at http://code.google.com/p/volatility/source/browse/branches/linux-support/ and checked out with:
svn checkout http://volatility.googlecode.com/svn/branches/linux-support volatility-linux
If you encounter a bug or want to stay current with the latest development, please be sure to update your checkout often, as development is very active.
6) Future Plans
Besides the previously mentioned topics, future plans for Linux support can be found in the TODO file of the linux-support branch. Once Volatility 1.4 is released, there will be a concentrated effort to get stable Linux support released, and we expect a number of improvements during that time.
7) Further Reading
If you are interested in Linux memory analysis and the theory behind the plug-ins, there are a number of references to check. First are three books that deal extensively with Linux internals: 1 2 3 . Second, there are a number of published papers in the field of Linux memory analysis, which can be found through these links: 1 2 . You should also visit http://lxr.linux.no, as it hosts a web-based LXR installation for all kernel versions and allows searching, cross-referencing, and so on. It is an amazing time saver during research.
Hopefully this post was not too overwhelming and inspired people to give the new Linux support a try. In future posts we will be presenting some interaction with the new support including malware detection capabilities currently being developed. Be sure to follow the blog’s twitter account @dfsforensics in order to get the latest information.