About

pracmln is a toolbox for statistical relational learning and reasoning that also includes tools for standard graphical models. It supports efficient learning and inference in relational domains. pracmln started as a fork of the ProbCog toolbox and has since been extended with recent developments in learning and reasoning by the Institute for Artificial Intelligence at the University of Bremen, Germany.

pracmln was designed with the particular needs of technical systems in mind. Our methods are geared towards practical applicability and can easily be integrated into other applications. The tools for relational data collection and transformation facilitate data-driven knowledge engineering, and the availability of graphical tools makes both learning and inference sessions a user-friendly experience. Scripting support enables automation, and for easy integration into robotics applications, we provide a client-server library implemented using the widely used ROS (Robot Operating System) middleware.

  • Markov logic networks (MLNs): learning and inference, Fuzzy-MLN reasoning, probabilistic reasoning about concept taxonomies.

  • Logic: representation, propositionalization, stochastic SAT sampling, weighted SAT solving, etc.

This package provides an implementation of Markov logic networks as a Python module (pracmln) that you can use to work with MLNs in your own Python scripts. For an introduction to using pracmln in your own scripts, see the API-Specification; a minimal example is sketched below.
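
For illustration, here is a minimal sketch of how an MLN can be constructed and queried from a script. It assumes the MLN, Database and MLNQuery classes of the pracmln module as documented in the API-Specification; the predicates, the formula and its weight, and the evidence are purely illustrative.

from pracmln import MLN, Database, MLNQuery

# declare an MLN with two predicates and one weighted formula
mln = MLN(grammar='StandardGrammar', logic='FirstOrderLogic')
mln << 'smokes(person)'                  # predicate declaration
mln << 'cancer(person)'
mln << '1.5 smokes(?p) => cancer(?p)'    # weighted formula (illustrative weight)

# assemble an evidence database
db = Database(mln)
db << 'smokes(Anna)'

# run MC-SAT inference on the query atom and print the result
result = MLNQuery(mln=mln, db=db, queries='cancer(Anna)', method='MC-SAT').run()
result.write()

The API-Specification documents the full set of constructor arguments, the MLN and database file formats, and the available inference and learning methods.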

Release notes

  • Release 1.2.4 (17.05.2019)

    • Fixed installation issues with pip

    • Minor fixes.

  • Release 1.2.2 (18.12.2017)

    • Support for Python 2 and Python 3

    • Release a pip-compliant package

    • Minor fixes

  • Release 1.1.2 (14.03.2017)

    • Fix: Patches for using toulbar2 on Windows platforms

  • Release 1.1.1 (13.03.2017)

    • Fix: Patches for Windows support

  • Release 1.1.0 (13.06.2016)

    • Fix: C++ bindings

    • Feature: literal groups for formula expansion (see Grouping Literals)

    • Fix: existentially quantified formulas evaluate to false when they cannot be grounded

    • Fix: cleanup of process pools in multicore mode

Citing

When you publish research work that makes use of pracmln, we would appreciate it if you reference pracmln in your work in the following way:

  • Nyga, D., Picklum, M., Beetz, M., et al., pracmln – Markov logic networks in Python, http://www.pracmln.org, Online; accessed May 17, 2019.

The following BibTeX entry can be used for documents typeset with LaTeX:

@Misc{,
    author = {Daniel Nyga and Mareike Picklum and Michael Beetz and others},
    title =  {{pracmln} -- Markov logic networks in {Python}},
    year =   {2013--},
    url =    {http://www.pracmln.org/},
    note =   {[Online; accessed <date>]}
}

Credits

Lead Developer

Daniel Nyga (nyga@cs.uni-bremen.de)

Contributors

  • Mareike Picklum

  • Ferenc Balint-Benczedi

  • Thiemo Wiedemeyer

  • Valentine Chiwome

Former Contributors (from ProbCog)

  • Dominik Jain

  • Stefan Waldherr

  • Klaus von Gleissenthall

  • Andreas Barthels

  • Ralf Wernicke

  • Gregor Wylezich

  • Martin Schuster

  • Philipp Meyer

Acknowledgments

This work is supported in part by the EU FP7 projects RoboHow (grant number 288533) and ACAT (grant number 600578).


Publications

  1. Dominik Jain. Knowledge Engineering with Markov Logic Networks: A Review. In DKB 2011: Proceedings of the Third Workshop on Dynamics of Knowledge and Belief. 2011.

  2. Dominik Jain and Michael Beetz. Soft Evidential Update via Markov Chain Monte Carlo Inference. In KI 2010: Advances in Artificial Intelligence, 33rd Annual German Conference on AI, volume 6359 of Lecture Notes in Computer Science, 280–290. Springer, 2010.

  3. Dominik Jain, Paul Maier, and Gregor Wylezich. Markov Logic as a Modelling Language for Weighted Constraint Satisfaction Problems. In Eighth International Workshop on Constraint Modelling and Reformulation, in conjunction with CP2009. 2009.

  4. Gheorghe Lisca, Daniel Nyga, Ferenc Bálint-Benczédi, Hagen Langer, and Michael Beetz. Towards Robots Conducting Chemical Experiments. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Hamburg, Germany, 2015.

  5. Daniel Nyga. Interpretation of Natural-language Robot Instructions: Probabilistic Knowledge Representation, Learning, and Reasoning. PhD thesis, University of Bremen, 2017. URL: http://nbn-resolving.de/urn:nbn:de:gbv:46-00105882-13.

  6. Daniel Nyga, Ferenc Balint-Benczedi, and Michael Beetz. PR2 Looking at Things: Ensemble Learning for Unstructured Information Processing with Markov Logic Networks. In IEEE International Conference on Robotics and Automation (ICRA). Hong Kong, China, May 31 – June 7, 2014.

  7. Daniel Nyga and Michael Beetz. Everything Robots Always Wanted to Know about Housework (But were afraid to ask). In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Vilamoura, Portugal, October 7–12, 2012.

  8. Daniel Nyga and Michael Beetz. Cloud-based Probabilistic Knowledge Services for Instruction Interpretation. In International Symposium of Robotics Research (ISRR). Sestri Levante (Genoa), Italy, 2015.

  9. Daniel Nyga and Michael Beetz. Reasoning about Unmodelled Concepts – Incorporating Class Taxonomies in Probabilistic Relational Models. arXiv preprint, 2015. URL: http://arxiv.org/abs/1504.05411.

  10. Daniel Nyga, Mareike Picklum, and Michael Beetz. What No Robot Has Seen Before – Probabilistic Interpretation of Natural-language Object Descriptions. In International Conference on Robotics and Automation (ICRA). Singapore, 2017. Accepted for publication.

  11. Daniel Nyga, Mareike Picklum, Sebastian Koralewski, and Michael Beetz. Instruction Completion through Instance-based Learning and Semantic Analogical Reasoning. In International Conference on Robotics and Automation (ICRA). Singapore, 2017. Accepted for publication.
