In this paper, our aim is to briefly survey and articulate, in a non-technical
fashion, the logical and philosophical foundations of using (first-order) logic
to represent (probabilistic) knowledge. Our motivation is threefold. First, for
machine learning researchers unaware of why the research community cares about
relational representations, this article can serve as a gentle introduction.
Second, for logical experts who are newcomers to the learning area, such an
article can help in navigating the differences between finite vs. infinite
domains, and subjective-probability vs. random-world semantics. Finally, for
researchers from statistical relational learning and neuro-symbolic AI, who
usually work in finite worlds with subjective probabilities, appreciating what
infinite domains and random-world semantics bring to the table is of utmost
theoretical import.