VDFuse replacing GuestFS

Just a week back, we decided to replace GuestFS with VDFuse. GuestFS does have certain advantages over VDFuse, notably support for several disk image formats (which makes it easy to extend to other formats later) and Python bindings, but we faced a lot
of problems working with it. Some of them are:

  • Installing GuestFS is a big headache: there are a lot of dependencies to install, and even then it does not always work. This might turn out to be one reason for developers not to try out PyTI.
  • GuestFS is not very stable. There were a lot of times it did not work, and a couple of occasions where GuestFS hung for no apparent reason.
  • Initializing GuestFS for reading from or writing to the disk takes a lot of time.

So, as I mentioned, we have moved to VDFuse for now. It sidesteps the difficulties we had with GuestFS and is a decent solution for reading from or writing to the disk.
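
To give an idea of what the switch means in practice, here is a minimal sketch of mounting a VDI image with the vdfuse command-line tool from Python. The flag usage and the paths are assumptions on my part, not code taken from PyTI.

import subprocess

def mount_vdi(image_path, mount_point):
    # assumed invocation: vdfuse -f <image> <mountpoint>
    # vdfuse then exposes the disk and its partitions as files under mount_point
    subprocess.check_call(["vdfuse", "-f", image_path, mount_point])

def unmount(mount_point):
    # FUSE mounts are released with fusermount -u
    subprocess.check_call(["fusermount", "-u", mount_point])

mount_vdi("disk.vdi", "/tmp/vdi")   # hypothetical paths, just for illustration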

Lessons learnt

Whenever I used a good piece of software before, I never bothered to think about the difficulties it went through during its development. Now, having worked on PyTI for over a month, I have come to realize that there is a lot of sweat behind any good project.
That prompted me to write a post on the difficulties we faced with PyTI.

If you have been following my blog, you already know that PyTI uses libvirt for virtualization and libguestfs for mounting the guest file system on the host. We faced a couple of problems using these libraries, and I will illustrate how we tackled them, along with the lessons I learnt through the journey.

1) Snapshot Error:
PyTI is interfaced to VirtualBox using libvirt. At one point I could not take snapshots of any disks, except for VMs attached to live boot images. After some searching here and there, I realized that the disk I was snapshotting was already attached to more than one VM. Disconnecting the disk from one of the VMs solved the problem.

Lesson learnt: it is always easy to blame others for your mistakes (in this case the “library”, libvirt).

2) Support for shared folders:
One of the initial solutions we considered for transferring data between host and guest was shared folders. As you might be aware, VirtualBox has a “shared folders” feature which lets the guest machine access a folder on the host. Since the VMs used in PyTI have no network access, we thought this was the most viable solution and were relying on it heavily. Soon we realized that the libvirt library did not (at that time) support shared folders. We then decided to
scrap the idea and go for Flux-based solutions. Now we use the libguestfs library.

Lesson learnt: it is easy to have a solution in mind and assume that it will always work out.

3) Downloading files from the virtual hard disk:
We had a problem downloading files from the virtual hard disk. After some searching, we realized that VirtualBox creates a diff image when a snapshot is taken, and libguestfs reads from the diff image rather than the original disk. Having two disks solved our problem:
one for I/O, which we never snapshot, and another for testing purposes, which we do snapshot (but never need to read data from on the host).

Lesson learnt: read the documentation first.

My experience so far

Having never contributed to Open Source before (except for a couple of patches to the PSF), this has been a wonderful opportunity that helped me learn a lot about Open Source and about Python as a programming language. I realized that Software Engineering is better understood practically
than theoretically (at school). With my mentors' support, I learnt a lot about how to program better, and how to program in a Pythonic way. There have been a couple of hiccups on my side, but they were always overcome with my mentors' help. Participating in GSoC has been really amazing. With mid-term evaluations coming up in three days, I will be a little busy finishing my pending work.

Work achieved so far (by me and Boris):
1) A VManager to handle VM functionality
2) A Diskhandler to transfer data to and from the virtual hard disk
3) Manager functionality which orchestrates all the operations: downloading the packages, calculating the dependencies, transferring data onto the disk, calling the VM, executing the tests and getting the results back (needs a little more polish)
4) A communication channel between master and slave (needs a little more polish)
5) Creating tasks for execution
6) Creating simple recipes
7) A Task Manager for executing tasks

VManager

This module, to be included in PyTI, is one of the most important parts of the architecture. The main goal of vmanager is to manage the VMs, but there are secondary goals too, like getting data from the virtual hard disk onto the host and vice versa. Even though it is designed with PyTI in mind, I guess it could work pretty well for other projects that need to manage virtual machines (except the ones which require networking). Of course, some tinkering would be needed to cater to each project's needs.

I will illustrate with a simple example how to start a VM, save its state (snapshot) and roll back:

from vms import *

# basic VM configuration: name, memory, boot image and virtual hard disk
config = {'name': 'test123', 'memory': '123',
          'disk_location': 'dsl-4.4.10.iso',
          'hd_location': 'disk.vdi'}

# connect to the local VirtualBox session and wrap the VM
a = VirtualMachine('hey', "vbox:///session", config=config)
a.start()                          # boot the VM
a.createSnapshot('hello', 'blah')  # save the current state as snapshot 'hello'
a.rollback('hello')                # revert the VM to that snapshot

Each of the operations performed should be pretty clear from the example above. Since vmanager uses the libvirt library, it won't be difficult to migrate to other hypervisors if the need ever arises.

For reading the virtual hard disk, I wrote diskhandler code, which uses the libguestfs library.

A small illustration of mounting the disk, and of uploading and downloading data between the host machine and the disk:

from diskhandler import *

d = DiskOperations('/home/yeswanth/a.vdi')          # open the virtual disk image
d.mount()                                           # mount its file system
d.upload("/home/yeswanth/a.txt", "/root/a.txt")     # host -> disk
d.download("/root/b.txt", "/home/yeswanth/b.txt")   # disk -> host
d.close_connection()                                # close the handle to the disk

For a more extensive read on vmanager or PyTI, please do read our documentation.

Fun with VirtualMachines

Over the last week, all I did was play with virtual machines. The tests on the distributions will run inside a virtual machine, so it is very important that we can control these VMs from a script.

Candidate: VirtualBox
Configuration:
Operating system: Damn Small Linux
Virtual hard disk: a VDI image of 2.5 GB
RAM: 256 MB
Library: libvirt, used to control the virtual machines
Features tested: start, stop, snapshot, rollback

Using libvirt was nice. It allows a whole range of hypervisors to be controlled through the same library, though I had my fair share of difficulties, especially with the documentation.
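
To give a feel for what that control looks like, here is a minimal sketch using the libvirt Python bindings against the VirtualBox driver, assuming a VM named test123 is already defined. It is only an illustration of the underlying calls, not the actual PyTI code.

import libvirt

conn = libvirt.open('vbox:///session')   # connect to the local VirtualBox driver
dom = conn.lookupByName('test123')       # an already defined VM (name assumed)

dom.create()                             # start the VM

# take a snapshot (libvirt expects an XML description of the snapshot)
snap_xml = "<domainsnapshot><name>hello</name><description>blah</description></domainsnapshot>"
dom.snapshotCreateXML(snap_xml, 0)

# ... run something inside the VM ...

# roll back to the snapshot and power the VM off
snap = dom.snapshotLookupByName('hello', 0)
dom.revertToSnapshot(snap, 0)
dom.destroy()

conn.close()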

Another feature I worked on this week is mounting the virtual hard disk image on the host, using the libguestfs library. The features I have added to PyTI for now are
uploading files from the host to the virtual hard disk and downloading files from it back to the host.
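
The underlying libguestfs calls look roughly like this. It is only a sketch: the device name of the partition is an assumption on my part, and the paths are just for illustration.

import guestfs

g = guestfs.GuestFS()
g.add_drive_opts('/home/yeswanth/a.vdi', readonly=0)   # attach the VDI image
g.launch()                                             # boot the libguestfs appliance

g.mount('/dev/sda1', '/')                              # mount the first partition (device assumed)
g.upload('/home/yeswanth/a.txt', '/root/a.txt')        # host -> disk
g.download('/root/b.txt', '/home/yeswanth/b.txt')      # disk -> host

g.umount_all()
g.close()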

I would really like to thank Alexis, one of my mentors, for his help in giving me feedback, refactoring my code and testing it.

Dependent Part of the Project

Till now I have mostly blogged about the environment part of the project. I should give some emphasis to the dependent part too. As you might have already guessed, the dependent part is the one which bridges the environment part and the execution part, so it mostly consists of APIs to carry information back and forth between the two parts (execution and environment).

We need the following information to be sent to the slave:

  • Distributions to be tested along with their dependencies
  • Configuration file

The configuration file includes:

  1. Name of the distribution
  2. List of tasks to be executed (static at the beginning, configured by the slave)
  3. Where to send back information through the raw data API (probably an IP address)

These are just the initial ideas. Having a config file means we can include other data in the future if and when required. A minimal sketch of what such a file might look like is shown below.
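
This is only an illustrative shape for the file; the field names and values are assumptions of mine, not a format PyTI has settled on.

import json

# hypothetical configuration for one test run
config = {
    "distribution": "some-package-0.1",
    "tasks": ["build", "install", "unittest", "pylint"],   # static list, adjusted by the slave
    "report_to": "192.0.2.10",                             # where to send raw data back
}

print(json.dumps(config, indent=2))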

So the slave sends the distributions, and the VM executes the tasks as specified by the configuration file.
The raw data (obtained from the tests) has to be reported back to the slave after all the tests have been conducted.
This is done through a raw data API.
The raw data API therefore needs to accommodate:

  1. Task id
  2. Raw data of this task + additional information (time of execution, memory usage, CPU usage…)

We decided to use the JSON-RPC format for the raw data API; a rough sketch of what a report might look like is shown below.
The core philosophy we are following when taking decisions: keep the implementation simple. This helps us have a prototype early on.
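
Something along these lines, where the method name and the fields are guesses of mine to illustrate the shape rather than the finalized API:

import json

# hypothetical JSON-RPC notification reporting the raw data of one task
report = {
    "jsonrpc": "2.0",
    "method": "report_raw_data",
    "params": {
        "task_id": 42,
        "raw_data": "...output captured from the task...",
        "execution_time": 12.3,    # seconds
        "memory_usage": 52428800,  # bytes
    },
}

payload = json.dumps(report)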

I thought I should also mention the lessons I learn through GSoC, so here is the first one.
GSoC Lesson 1
I always thought that I would start coding right away as soon as the bell rings in GSoC. But over the past few weeks I have come to realize that for an essentially new project like PyTI (even though it was worked on last year), design decisions matter even more than implementation. I would have been working blind if I had started coding directly 🙂

Research on master-slave architecture

The environment part can be divided into two parts:
1) Master-slave architecture
2) Raw data API (which depends on the execution project)

My last week went into researching master-slave architectures.

I looked at two projects, Bitten and Condor, both for their master-slave architecture, and thought I should jot down some points which might be
helpful while designing PyTI.

Bitten

Bitten is a Python-based continuous integration tool for collecting various software metrics. It builds on Trac. It uses a distributed build model with a master-slave architecture, wherein one or more
slaves run the actual tests and a master gathers the results.

Bitten uses a build recipe (a configuration file which determines what commands to execute, and where certain artifacts and reports can be found after a command is executed).
Build recipes are used for communication between the master and the slave; they are written in XML and give commands to the slave.
Bitten’s master-slave protocol is a simple peer-to-peer communication protocol in which either the master or the slave can initiate exchanges.

Condor

Condor is a project aimed at utilizing idle CPU cycles across distributed systems. Any job that needs to be done is given to Condor; Condor finds an idle machine and executes the job there, and if the owner of the machine wants it back, Condor can preempt the job and move it to another machine.

A job is given to Condor as a job ad, with parameters and preferences for the job and the tasks to be executed. Machines which volunteer their resources also communicate their preferences to Condor, and Condor sets up the job by matching the two sides’ preferences. The machines communicate with Condor by providing it a configuration file.

PyTI (PyPI Testing Infrastructure) – My GSoC Proposal

Project Overview

The goal of the project is to test distributions from the PyPI repository to
assess their quality and to check whether a distribution is malicious. To
achieve that, we create a testing infrastructure for the PyPI repository.
There will be a mechanism to fetch newly uploaded distributions from PyPI,
install them in an isolated VM environment, run tests on them (quality
checks, unit tests) and determine whether they contain harmful (malicious)
components. The project can be divided into two parts: one (environment) to
subscribe to uploaded packages and set up the environment, and the other
(execution) to run the tests and report the results back to the environment
part.

Detailed work

This project can be divided into two components: the execution part and the
environment part. Since each of these two parts is comprehensive enough on
its own, each will be handled by a single student. The execution part takes
care of installing the distributions (to be tested) along with their
dependencies, running tests on these distributions and assessing different
quality parameters. Tests may include unit tests, quality tests (like pep8
or pylint) or custom tests to check whether the program is malicious.

Environment part of the project

This proposal concerns the environment part. The environment part of the
project is responsible for creating an abstraction for the execution part.
It handles delivery of distributions (and their dependencies) to the
execution part, so that tests can be run on them. It handles all the
protocols required to communicate with the PyPI repository and with the
different parts of the architecture used in the project. It subscribes to
packages uploaded to PyPI so they can be tested (the testing itself is done
by the execution part). It is also responsible for setting up the
environment required for testing and for delivering the packages to the
execution part.

Terminology

  • Raw data: the data generated by task execution.
  • Report: evaluation of the different features/attributes of the data.
  • Task: an execution step which produces raw data and "output", e.g. build,
    install, unittest, pylint…

Architecture

  • Master-slave architecture, where the master dispatches jobs to the slave
    and the slave executes them.
  • The communication between master and slave happens through an API called
    the command API.
  • The slave communicates with the VM, sending the distributions required
    for testing and receiving raw data (after the distributions have been
    installed and the tests conducted), using another API called the raw
    data API.
  • Tests are run on VMs, and each VM is handled by a slave.

Raw Data API

  • The task is to build a raw data API for the communication between the VM
    and the slave.
  • The raw data API handles sending the data into the corresponding VMs.
  • The raw data API also handles sending the raw data (once the execution
    part has finished) from the VM back to the slave.

Command API

  • The task is to build a command API for communication between the master
    and the slave.
  • The command API carries the task requests issued by the master and
    assigns them to the slave.
  • Task requests can involve different configurations to be made on a VM,
    which distributions are to be tested, etc.

Implementation

  • For both APIs I propose using either the XML or the JSON format, as I
    think both are easy to work with and both have good library support. A
    rough sketch of a JSON command message is shown below.
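
Just to illustrate the kind of message I have in mind; the field names here are made up for the example and are not a finalized schema.

import json

# hypothetical command message sent from the master to a slave
command = {
    "command": "test_distribution",
    "distribution": "some-package-0.1",
    "vm_config": {"memory": 256, "os": "dsl"},
    "tasks": ["install", "unittest", "pep8"],
}

message = json.dumps(command)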

Slave

The slave performs the following tasks:

  • Initialises an isolated VM and configures it using the configuration
    provided by the API call.
  • Communicates with the PyPI repository to get the distributions to be
    tested.
  • Gets the distribution to be tested from the repository, computes its
    dependencies and fetches the dependencies from the repository as well.
  • Passes all the packages to the VM.
  • Receives the raw data from the VM.

Implementation

  • The slave is required to differentiate between the different VMs and to
    keep track of the activities happening in each of them.
  • The slave initializes and configures the VM by making an API call to it.
  • When the packages are sent into the VM, they can be stored in a folder,
    and the execution part can keep polling that folder to see whether any
    package has been received, so that testing can start (see the sketch
    below).
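
A minimal sketch of that polling loop, assuming a drop folder path and a poll interval of my own choosing:

import os
import time

INCOMING = "/var/pyti/incoming"   # hypothetical drop folder inside the VM

def wait_for_packages(poll_interval=5):
    """Block until at least one package shows up in the drop folder."""
    while True:
        packages = [f for f in os.listdir(INCOMING)
                    if f.endswith((".tar.gz", ".zip"))]
        if packages:
            return packages
        time.sleep(poll_interval)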

Master

  • The master subscribes to packages uploaded to PyPI.
  • It dispatches jobs to the slave using the command API.
  • It receives the test results from the slave.

Implementation

  • In order to subscribe to packages from PyPI, we can use the PubSubHubbub
    protocol to get a real-time feed as and when a package is uploaded. A
    minimal subscription sketch is shown below.
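
Roughly like this, where the hub URL, feed URL and callback endpoint are placeholders of mine (PyPI's actual hub details would need to be checked):

from urllib.parse import urlencode
from urllib.request import urlopen

# PubSubHubbub subscription request (0.3-style parameters)
params = urlencode({
    "hub.mode": "subscribe",
    "hub.topic": "https://pypi.example.org/packages.rss",   # placeholder feed
    "hub.callback": "http://master.example.org/pypi-hook",  # placeholder callback
    "hub.verify": "async",
})

urlopen("https://hub.example.org/", data=params.encode("utf-8"))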

Example scenario

  1. A developer uploads a distribution to PyPI. (External)
  2. PyPI notifies PyTI (here the master gets notified). (Environment)
  3. The master asks a slave (a local or remote machine) to test the
    distribution using the command API. (Environment)
  4. The slave computes the dependencies of the distribution to be tested and
    downloads the distribution along with its dependencies from the
    repository. (Execution)
  5. The slave starts a VM with the settings instructed by the master. (Environment)
  6. When the VM has started, the slave sends the distribution (along with
    its dependencies) into the VM. (Environment)
  7. Inside the VM, the distribution is installed and different tests are
    conducted on it (unit tests, quality checks, etc.). (Execution)
  8. At the end, the raw data (data obtained by testing) is sent to the slave. (Execution)
  9. The slave sends the raw data to the master. (Environment)
  10. The slave then shuts down the VM and cleans it up. (Environment)

PyTI Infrastructure

I have not been blogging in a while, but I guess I should blog more about what I am doing these days, which is mostly working on my GSoC project proposal.

Here is what I learnt after a lot of discussion on the mailing list and after talking to different people about the idea.

The idea goes somewhat like this:

Basically, there are a lot of packages in the PyPI (Python Package Index) repository, and it is open to everyone. So the community decided to implement PyTI (PyPI Testing Infrastructure) to test the packages a user wants to download.

The course of the project goes somewhat like this:

1) The user requests a test on a package he wants to download.
2) Using a feed/notifier, we schedule an init() call to a virtual manager (in this case Amazon EC2 is the most preferred option).
3) The virtual manager boots the OS and then cleans up the environment for testing.
4) The packages are installed along with their dependencies.
5) Tests are conducted on these packages, which include:
   1. the test suite (if one is already present)
   2. pep8, pylint, McCabe complexity
   3. custom tests
6) The VM is shut down after cleaning up the environment.
7) A QA check is done and the user is notified of the results.