Direct Graphical Models  v.1.7.0
Demo Dense

This demo gives a short introduction to using the DGM library with complete (dense) graphical models. A complete graph is a simple undirected graph in which every pair of distinct vertices is connected by a unique edge. Applying the regular edge potentials used for pairwise graphs makes inference practically intractable, so special edge models for dense graphs should be used.

We start this demo in the same way as the Demo Train, where we used a pairwise graphical model:

#include "DGM.h"
#include "VIS.h"
#include "DGM/timer.h"
using namespace DirectGraphicalModels;
int main(int argc, char *argv[])
{
const Size imgSize = Size(400, 400);
const int width = imgSize.width;
const int height = imgSize.height;
const byte nStates = 6; // {road, traffic island, grass, agriculture, tree, car}
const word nFeatures = 3;
if (argc != 7) {
print_help(argv[0]);
return 0;
}
// Reading parameters and images
Mat train_fv = imread(argv[1], 1); resize(train_fv, train_fv, imgSize, 0, 0, INTER_LANCZOS4); // training image feature vector
Mat train_gt = imread(argv[2], 0); resize(train_gt, train_gt, imgSize, 0, 0, INTER_NEAREST); // groundtruth for training
Mat test_fv = imread(argv[3], 1); resize(test_fv, test_fv, imgSize, 0, 0, INTER_LANCZOS4); // testing image feature vector
Mat test_gt = imread(argv[4], 0); resize(test_gt, test_gt, imgSize, 0, 0, INTER_NEAREST); // groundtruth for evaluation
Mat test_img = imread(argv[5], 1); resize(test_img, test_img, imgSize, 0, 0, INTER_LANCZOS4); // testing image

But here, in order to utilize a complete graphical model, we use the DirectGraphicalModels::CGraphKit factory with the parameter DirectGraphicalModels::GraphType::dense.

Please note that the same demo could also be run with a pairwise graphical model: simply pass DirectGraphicalModels::GraphType::pairwise to the factory instead. In that case the only difference from the Demo Train will be the use of the default edge model, which is independent of the training data.
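For reference, the pairwise variant of the factory call would look like this (a sketch only; just this one line changes, the rest of the demo stays as shown):

```cpp
// Pairwise variant of the same demo: only the graph type passed to the
// factory changes; node training, filling and decoding stay identical.
auto graphKit = CGraphKit::create(GraphType::pairwise, nStates);
```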

auto nodeTrainer = CTrainNode::create(Bayes, nStates, nFeatures);
auto graphKit = CGraphKit::create(GraphType::dense, nStates);
CMarker marker(DEF_PALETTE_6);
CCMat confMat(nStates);

Here we can omit the graph building stage (as we do not train the edge model) and start directly with the second stage, training the node potentials:

// ========================= STAGE 2: Training =========================
Timer::start("Training... ");
nodeTrainer->addFeatureVecs(train_fv, train_gt);
nodeTrainer->train();
Timer::stop();
// ==================== STAGE 3: Filling the Graph =====================
Timer::start("Filling the Graph... ");
Mat nodePotentials = nodeTrainer->getNodePotentials(test_fv); // Classification: CV_32FC(nStates) <- CV_8UC(nFeatures)
graphKit->getGraphExt().setGraph(nodePotentials); // Filling-in the graph nodes
graphKit->getGraphExt().addDefaultEdgesModel(100.0f, 3.0f); // default edge model, independent of the image data
graphKit->getGraphExt().addDefaultEdgesModel(test_fv, 300.0f, 10.0f); // default edge model based on the image features
Timer::stop();

Please note that in the third stage we have added two default edge models. For complete graphs we can use multiple edge models, which will be applied one after another during the iterations of the inference process.

For pairwise graphs, only the last added default edge model will be in use.

Check the documentation of the DirectGraphicalModels::CGraphDenseExt class for information about creating and using more sophisticated edge models for dense graphs.
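As an illustration only, adding such models might look along these lines; the method names addGaussianEdgeModel and addBilateralEdgeModel, their signatures, and all numeric parameters are assumptions to be verified against the CGraphDenseExt class reference:

```cpp
// Sketch only: assumed CGraphDenseExt API, verify against the class reference.
// All numeric parameters are illustrative.
CGraphDenseExt &graphExt = dynamic_cast<CGraphDenseExt &>(graphKit->getGraphExt());

// Smoothness kernel: penalizes different labels on spatially close pixels
graphExt.addGaussianEdgeModel(Vec2f(3.0f, 3.0f), 100.0f);

// Appearance kernel: penalizes different labels on pixels that are both
// spatially close and similar in their feature (color) values
graphExt.addBilateralEdgeModel(test_fv, Vec2f(10.0f, 10.0f), 30.0f, 300.0f);
```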

The decoding and evaluation stages are also the same as in the Demo Train project:

// ========================= STAGE 4: Decoding =========================
Timer::start("Decoding... ");
vec_byte_t optimalDecoding = graphKit->getInfer().decode(100); // decoding with 100 iterations
Timer::stop();
// ====================== Evaluation =======================
Mat solution(imgSize, CV_8UC1, optimalDecoding.data());
confMat.estimate(test_gt, solution);
char str[255];
snprintf(str, sizeof(str), "Accuracy = %.2f%%", confMat.getAccuracy());
printf("%s\n", str);
// ====================== Visualization =======================
marker.markClasses(test_img, solution);
rectangle(test_img, Point(width - 160, height - 18), Point(width, height), CV_RGB(0, 0, 0), -1);
putText(test_img, str, Point(width - 155, height - 5), FONT_HERSHEY_SIMPLEX, 0.45, CV_RGB(225, 240, 255), 1, LineTypes::LINE_AA);
imwrite(argv[6], test_img);
imshow("Image", test_img);
waitKey();
return 0;
}