DGM is a cross-platform C++ library implementing various tasks in probabilistic graphical models with pairwise or complete (dense) dependencies. The library is designed for Markov and Conditional Random Fields (MRF / CRF), Markov Chains, Bayesian and Neural Networks, etc. Specifically, it includes a variety of methods for the following tasks:
- Learning: Training of unary and pairwise potentials
- Inference / Decoding: Computing the conditional probabilities and the most likely configuration
- Parameter Estimation: Computing maximum likelihood (or MAP) estimates of the parameters
- Evaluation / Visualization: Evaluation and visualization of the classification results
- Data Analysis: Extraction, analysis and visualization of valuable knowledge from training data
- Feature Extraction: Extraction of various descriptors from images, which are useful for classification
These tasks are optimized for speed, i.e. highly efficient computation. The code is written in optimized C++17, compiles with Microsoft Visual Studio, Xcode and GCC, and can take advantage of multi-core processing as well as GPU computing. DGM is released under a BSD license and is hence free for both academic and commercial use.
The first example considers a challenging problem in remote sensing: the precise classification and reconstruction of urban areas from aerial images. Here we took airborne color-infrared images with the corresponding digital surface models (terrain elevation maps), captured with aircraft-based visual sensors. The DGM library was used here for joint object detection (e.g. cars (red)) and Earth surface classification (grass (green), road (gray), sidewalks (blue), etc.).
Please visit the project web page for more details.
Algorithms for accurate eyelid, iris and pupil extraction are currently in high demand, as they enable challenging technological applications such as robot-assisted surgery, biometrics and gaze tracking. The input image here is a photo of an eye. The second image shows the per-pixel classification result, and the third image shows the same result after applying the DGM-based Conditional Random Field. Here, the CRF-based classifier was trained on only 22 manually segmented images.
Four different colors represent the eyelids (yellow), sclera (red), iris (blue) and pupil (gray).
In this example we apply a DGM-based Markov Random Field to the stereo correspondence problem. The disparity map between the left and right images of a stereo pair is estimated in the form of a class map, where every class represents a small range of disparity values. The following result was achieved by applying the approximate tree-reweighted inference algorithm with a Potts smoothness term; thus, no training was needed.
Please refer to Demo Stereo for the source code and more details.
The labeling of Environmental Microorganisms (EM), which help decompose pollutants, plays a fundamental role in establishing a sustainable ecosystem. Here we propose an EM classification engine that automatically analyzes microscopic images using DGM-based Conditional Random Fields. Our CRF model localizes and classifies EMs by considering the spatial relations among local and global features.
Images from left to right: input image of Keratella quadrata; extracted local features; and the resulting segmentation. Please visit the project web page for more details.
If you have applied the DGM library in your work and achieved interesting results, please send them to me via firstname.lastname@example.org so that I can feature them on this web page. This will help us promote and improve the library. We thank you in advance for your support.
DGM in Publications
To reference DGM in a publication, please include the library name and a link to this website [BibTeX]. You may also want to include the library version, since the software is updated regularly.
If you use this software in a publication, please cite the work using the following information:
Sergey Kosov. Direct Graphical Models C++ library. http://research.project-10.de/dgm/, 2013.
or using the BibTeX file.