|Roadmap for Rules, Semantics, and Business|
|Semantic Technologies and the Cloud: Rules for the Next Generation|
|ONTORULE: Where Ontologies Meet Business Rules|
|Introduction to Machine Learning|
|Rules Technologies Inside Out: Two Expert Perspectives|
|Domain Specific Languages: Notation for Experts|
|Goal Oriented Programming|
|Effective Scaling of Long-term Memory for Reactive Rule-based Agents|
Rule-Based Automatic Management of a Distributed Simulation Environment
|Ronald Bowers, US Army Research Laboratory|
In this presentation, we will discuss the integration of a rules-based management system into the MUVES 3 simulation environment. We will present an overview of the MUVES 3 system, its architecture, and its management requirements. We will discuss the underlying Rio framework, the integration of rules into Rio, and the rules that have been developed to manage MUVES 3. Finally, we will discuss anticipated future expansion of the usage of rules within both Rio and MUVES 3.
The US Army Research Laboratory (ARL) is developing a new simulation environment to assist the laboratory in performing vulnerability/lethality (V/L) analysis. V/L analysis is the study of how resistant Army systems are to enemy attack, and conversely, how effective US systems are at defeating the enemy. The new simulation, MUVES 3, is a multi-user, network-distributed, service-oriented system. It will provide ARL analysts with a unified environment in which they can prepare their input, execute their simulation runs, and examine the results from the runs.
Managing MUVES 3 will be difficult. The deployed system will consist of potentially thousands of services distributed over hundreds of nodes. The system deploys additional services as necessary to support analysis requirements and undeploys them when they are no longer needed. As a result, different analyses place vastly different loads on the system. Additionally, the system must contend with highly variable usage patterns. Analyses have varying priorities. Some are allowed months to complete, while others might be required in a matter of days or even hours. Furthermore, analyses produce large volumes of result data. This data must be carefully managed.
Given that MUVES 3 must support over 100 concurrent users with 24/7 uptime, it is imperative that the system be managed robustly. The objective of our effort is to automate management of the MUVES 3 system. To accomplish this, we have integrated the Drools business rules management system with the Rio distributed application framework. MUVES 3 is based upon the Rio framework. The integration of Drools and Rio enables us to collect telemetry from services in the MUVES 3 environment, evaluate that telemetry against a set of business rules, and invoke commands on services within the system to carry out the decisions made by the rules system. Several system behaviors are now automatically managed using rules. These include throttling simulation job execution in accordance with available resources, scaling the number of service instances according to load, and managing the lifecycle of simulation result data.
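The scaling behavior described above can be illustrated with a minimal condition-action sketch: telemetry is evaluated against a rule, and the rule's decision adjusts the number of service instances. The telemetry field, thresholds, and scaling policy below are hypothetical, not taken from the actual MUVES 3 or Drools rule base.

```python
# Minimal sketch of rule-driven service scaling. The telemetry field,
# thresholds, and policy are illustrative, not from MUVES 3.

def scale_rule(telemetry, instances, low=0.25, high=0.75):
    """Return the new instance count for a service given its telemetry."""
    load = telemetry["cpu_load"]      # fraction of capacity in use
    if load > high:                   # overloaded: deploy another instance
        return instances + 1
    if load < low and instances > 1:  # underused: undeploy one instance
        return instances - 1
    return instances                  # within band: no change

# One telemetry-evaluation cycle per service
assert scale_rule({"cpu_load": 0.9}, 3) == 4
assert scale_rule({"cpu_load": 0.1}, 3) == 2
assert scale_rule({"cpu_load": 0.5}, 3) == 3
```

A production rule engine would express the same condition-action logic declaratively and fire it whenever matching telemetry facts enter working memory.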
ONTORULE: Where Ontologies Meet Business Rules
|Hugues Citeau, IBM Center for Advanced Studies|
In this session, we will first introduce the ONTORULE project: the vision, the use cases, the components, and the results thus far. We will then focus on the problem of combining, at runtime, rules with a conceptual model and data represented as an OWL ontology, with a focus on the case of production rules.
The objective of the ONTORULE project is to enable users, from business executives through business analysts to IT developers, to interact in their own way with the part of a business application that is relevant to them. We believe that one essential step towards achieving that objective is the ability to cleanly separate the conceptual domain knowledge from the actual business rules, on the one hand; and the representation of the knowledge from its operationalization in IT applications, on the other.
- Where are the difficulties?
- Description of emulation, loose coupling and tight coupling approaches
- Discussion of the benefits and limitations of each approach
- Future directions
ONTORULE aims at exploring the consequences of such a separation of concerns at all the stages of a business rule application: domain modelling and rule acquisition, business rules and domain knowledge management, execution and inference.
ONTORULE assumes that, at runtime, the conceptual domain knowledge and the application data is represented as an OWL ontology, and that the rules apply to that ontology: in this session, we will present and discuss the problems raised by that assumption and the solutions developed in the project, with a particular focus on production rules.
We explored four approaches to representing and executing production rules against an OWL ontology:
- Emulation of the ontology in a classical business rule engine's object model;
- Loose coupling between a classical RETE-based business rule engine and an OWL reasoner;
- Loose coupling between a specially adapted RETE-based engine and an OWL reasoner;
- Tight coupling of the ontological reasoning into the RETE-based engine.
The latter two approaches require not only special implementations of the RETE algorithm, but also specially designed rule languages. All of these will be demonstrated and discussed.
Two large industrial companies, AUDI and ArcelorMittal, define the requirements that help focus the research, and develop the pilot applications that provide the test beds to validate the technology developed in the project. Our demonstration and examples are taken from the demonstrators that the project developed for the two pilot applications.
All our approaches, including the most promising one, tight coupling, have severe limitations. We are currently investigating several approaches to overcome them, but we need to gain a better understanding of the users' requirements to prioritize among these directions.
See the ONTORULE project Web site: www.ontorule-project.eu
The ONTORULE project is partially funded by the European Commission under grant agreement n° 231875
Introduction to Machine Learning
|Andrew Ng, Assistant Professor, Computer Science, Stanford|
Full abstract coming soon!
Effective Scaling of Long-term Memory for Reactive Rule-based Agents
|Nate Derbinsky, Postdoctoral Researcher, Computer Science, University of Michigan|
This talk is about the development and evaluation of long-term memory systems for real-time, rule-based agents that can be tasked with, and must adapt to, multiple knowledge-rich problems over extended time scales.
We research autonomous agents that are subject to a number of challenging constraints. For example, decision making must occur in real time, defined as no longer than 50 milliseconds, which means that our systems can support reactive control, such as over mobile robots. Additionally, these agents are not single-purpose, and the problems with which we task them require reasoning over a great deal of information, and learning from experience. Rule-based systems have been applied to complex reactive systems, but integrating access to large stores of information and experience imposes a number of design and implementation challenges.
In this presentation, we will first describe and analyze techniques that we developed to support efficient access to large stores of task-relevant memories, as well as detail how we integrated these capabilities within Soar, a general agent architecture. Furthermore, we provide performance evaluation of these mechanisms when applied within a variety of problems, including mobile robotics, non-player characters (NPC) within action games, planning, linguistic interpretation, and mobile music interaction.
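One common way to keep retrieval over a large memory store efficient is cue-based lookup through an inverted index, so that matching cost grows with the cue rather than with the size of the store. The sketch below illustrates that general idea only; it is an illustrative assumption, not Soar's actual memory mechanism.

```python
from collections import defaultdict

# Sketch of cue-based retrieval over an inverted index. This illustrates
# one way to keep retrieval cheap as the store grows; it is not Soar's
# actual long-term memory implementation.

class LongTermMemory:
    def __init__(self):
        self.episodes = []             # episode id -> feature set
        self.index = defaultdict(set)  # feature -> ids of episodes containing it

    def store(self, features):
        eid = len(self.episodes)
        self.episodes.append(frozenset(features))
        for f in features:
            self.index[f].add(eid)
        return eid

    def retrieve(self, cue):
        """Return the most recent episode matching every feature in the cue."""
        candidates = set.intersection(*(self.index[f] for f in cue))
        return max(candidates) if candidates else None

mem = LongTermMemory()
mem.store({"color:red", "shape:ball"})
mem.store({"color:red", "shape:cube"})
assert mem.retrieve({"color:red", "shape:cube"}) == 1
assert mem.retrieve({"color:blue"}) is None
```

The intersection touches only the postings lists for the cued features, which is what keeps retrieval time largely independent of how many episodes have accumulated.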
Knowledge Wars: Becoming a Knowledge Engineer
|Rolando Hernandez, CEO, BizRules|
This presentation answers these key questions:
- What do knowledge engineers do?
- How do you become a knowledge engineer?
- What do knowledge engineers need to know?
- When do you need to use a knowledge engineer?
- What is the future of the knowledge engineering and business rule architecture profession?
Business is war. Business today is about surviving battles, complying with internal rules and external government regulations, disrupting competition, and ruling markets. Smart companies are building rule-based apps to outsmart competitors and win the battles.
Knowledge is Power. We live in a knowledge-based society and economy. Knowledge is every company's most valuable asset. Companies that want to rule the world are building knowledge-based apps to preserve and automate their most valuable asset before it melts away.
The knowledge engineer is responsible for extracting the knowledge, harvesting the rules, and designing the rule bases and knowledge bases. He or she is responsible for designing and building the rule-based apps and knowledge-based apps that businesses use to compete, comply, and survive.
We will explore these topics:
Why business needs knowledge engineers
- How do you bridge the gap between business and IT?
- How do you liaison between business (users, experts, customers) and IT (developers)?
How do you transfer knowledge and rules in the mind into 0s and 1s in the computer?
- What questions should you ask SMEs to extract knowledge and rules?
- What do you do when the experts can't agree on the rules?
- How do you transfer knowledge and rules from SME to BRE?
What is rules analysis, design, architecture, and engineering?
- Knowledge acquisition (KA)
- Knowledge representation (KR)
How do you write business rules in five simple steps?
- Knowledge modeling (KM)
- A picture is worth a thousand words.
How do you design a rulebase?
- Knowledge automation (KA)
How do you design and architect a rules-based app?
How do you engineer a rules-based system so it aligns to the enterprise architecture?
Tips, lessons learned, and best practices for Knowledge Engineers
- The Knowledge Supply Chain.
- Recommended diagramming standards for Knowledge Engineers.
- Example templates for documenting rules.
- 10 Things I Learned Capturing Knowledge from SMEs.
- 10 Things I Learned Writing Business Rules in BREs.
For those who want to learn more, you can also attend the Rules 101 session on "Rules Harvesting Live," an interactive knowledge acquisition session where the audience plays the role of SMEs and where these lessons are used to capture, model, and document rules and knowledge from experts in the audience.
Choosing Data For Rule Interchange
|Christian de Sainte Marie, IBM, ILOG|
In this session, I will:
- Introduce W3C RIF briefly
- Explain how the <Import> feature is used to combine rules interchanged in RIF with application data models and data represented as OWL ontologies or RDF graphs.
- Present the proposed extension of the RIF <Import> feature to combine the interchanged rules with XML data and application data models represented as XML schema.
- Discuss how the proposed extension satisfies the requirements for combining RIF with object models and object-oriented data in general, both syntactically and semantically, and what the remaining challenges are.
- Discuss how the <Import> feature could be further generalized to allow the combination of RIF documents with other data sources, including relational data bases.
The W3C Rule Interchange Format (RIF), which became a W3C recommendation in June 2010, was an important step in making rules, that is, the executable logic of a rule-based application, a piece of data like any other interchangeable piece of data: one that can be produced and maintained in one place, and published, shared, and executed everywhere.
But executable rules are an active kind of data, meant to be combined with application data to produce inferences that, in turn, result in application data updates and other kinds of actions. RIF provides a standard data model with the associated semantics and XML serialization for rules, but a consumer application of RIF rules can produce meaningful inferences only if it knows for what data the rules are intended; that is, if the application data model that the rules assume can be interchanged along with the rules.
RIF specifies how to combine rules with OWL ontologies and RDF graphs in general. But most of the data, on the Web and elsewhere, is neither modelled nor accessible as OWL ontologies or RDF graphs: relational and object models are much more widely used. The RIF working group has published the working draft of a proposed extension to specify how to interpret the combination of RIF documents and XML data, possibly associated with XML schemas.
XML, with or without XML Schemas, is one of the most widely used data interchange formats, and XML-processing infrastructure can be considered almost universally available. Many industry-specific consortia specify standard XML formats for wide ranges of industry-specific data, making standard XML DTDs and XML Schemas widely used in many application domains.
In addition, in this presentation, I will show that the proposed extension satisfies most of the requirements for combining RIF with object models and object-oriented data, both syntactically and semantically. The main unresolved issue is how to publish and interchange the semantics of interpreted functions (such as object methods): the absence of a standard may dramatically limit the usefulness of rule interchange.
References and downloadable resources
The Wiki of the W3C RIF working group, with all the useful links (latest versions of working drafts and technical notes, mailing lists etc): http://www.w3.org/2005/rules/wiki/RIF_Working_Group.
Using Constraint Solvers as Inference Engines to Validate and Execute Rules-based Decision Models
|Jacob Feldman, OpenRules, Inc.|
This presentation describes how to use constraint solvers as generic inference engines in the context of modern business decision management systems.
Like traditional Rete-based rule engines, a constraint-based engine supports declarative inferential relationships between multiple rule families and does not assume any rule ordering within rule families. It verifies rule consistency and points to possible conflicts between rules across the entire decision model, validates input data, and executes the decision model, delivering results in business terms. The proposed approach does not require additional code or any changes to the representation of decision models created by business users.
In fact, a user may switch between the rule engine and the constraint solver without changing rule families. In addition to traditional rule engine functionality, a constraint-based inference engine can find decisions when business rules define a problem only partially, and can find decisions that optimize certain business objectives.
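As a rough illustration of the idea, the sketch below treats rules as declarative constraints over finite domains and searches for an assignment that satisfies all of them while optimizing an objective. The brute-force search, variable names, and rules are illustrative stand-ins for a real constraint solver and decision model.

```python
from itertools import product

# Sketch of constraint-based decision making: rules are declarative
# constraints over finite domains, and the "solver" (brute-force search
# here, standing in for a real constraint engine) finds an assignment
# that satisfies them and optimizes an objective. Variables and rules
# are illustrative.

domains = {"rate": [3, 4, 5], "term": [12, 24, 36]}

rules = [
    lambda d: d["rate"] >= 4 or d["term"] <= 24,  # long terms need higher rates
    lambda d: d["term"] >= 12,
]

def decide(domains, rules, objective):
    names = list(domains)
    best = None
    for values in product(*(domains[n] for n in names)):
        d = dict(zip(names, values))
        if all(rule(d) for rule in rules):          # consistent with all rules?
            if best is None or objective(d) > objective(best):
                best = d
    return best

# Among all consistent decisions, prefer the lowest rate.
decision = decide(domains, rules, lambda d: -d["rate"])
assert decision == {"rate": 3, "term": 12}
```

Note that the rules here only partially constrain the problem; the solver still produces a concrete decision, which is the capability the abstract highlights over a pure rule engine.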
KMR-II: A Knowledge Management Architecture For Healthcare
|Emory Fry, US Navy|
This presentation is about the engineering and development of the Knowledge Management Repository (KMR-II). KMR-II provides integrated knowledge management, analytic, and predictive modeling capabilities critical to the immediate and long-term care of our patients.
As a sophisticated, standards-based Clinical Decision Support environment, it is uniquely suited to deliver knowledge services that can be layered on a variety of health information networks. It represents the final stages of almost five years of development utilizing open source, open standards, and collaborative engineering efforts.
Faced with increasing costs, regulatory oversight, and an explosion in clinical information, health care organizations desperately need better tools to efficiently manage their workforce, to improve access to care, and to align available resources with projected health care demands. Patients increasingly expect their care not only to reflect best clinical practice, but also to respect their individual preferences and values. Delivering cost-effective, quality health care under these conditions will demand more analysis, coordination, and anticipatory foresight than any one provider or team can deliver without assistance.
Knowledge Management Repository (KMR-II) is a second generation Clinical Decision Support (CDS) platform for health care environments. It provides a standards-based object model for managing the structure and semantics of data obtained from local and distributed storage repositories. This canonical fact model can then be reasoned over using the knowledge management, business intelligence, and predictive analytic technologies required for advanced cognitive and workflow optimization.
An Event Driven Architecture (EDA), combined with a Service Oriented Architecture (SOA), is used to deploy and manage these capabilities. The EDA handles the temporal aspects required for effective Clinical Decision Support, including initiating appropriate analytic processing in response to real-time events. Triggers can be messages, for example the HL7 transaction sets used to communicate laboratory results, or patient monitor waveforms that require Complex Event Processing to be handled effectively. The initiated workflows are then managed using SOA components, each service ensuring that core business logic is well-abstracted, reusable, and encapsulated behind standards-based interfaces. A commodity workflow engine provides the advanced process orchestration and state management critical for executing complex clinical guidelines and treatment plans.
A Production Rule engine is utilized to a) capture and encode clinical domain expertise, b) ensure process validity with respect to declarative constraints, and c) provide flexible control over application/middle-tier behavior. A design principle unique to our approach is that rule and workflow processing is done in a patient-specific context, each session being dynamically instantiated and provisioned with select knowledge bases and individualized preferences. This design, while resource intensive, ensures personalized, high-performance rule evaluations.
Not all clinical decisions are best approached with predicate logic; some require alternative inference techniques. To expand the analytic capabilities available, we implemented a Predictive Model Markup Language (PMML) infrastructure, the de facto standard for representing predictive models, so that resource-capacity planning, risk-assessment, and diagnostic models could be plugged into the Clinical Decision Support architecture.
Roadmap for Rules, Semantics, and Business
|Paul Haley, Automata|
Keynote: Decades of incremental development of primarily forward chaining production rule systems have cleaved the business rule engine market from the knowledge-based systems and artificial intelligence technology of the eighties.
Today, artificial intelligence efforts such as IBM's Watson, Wolfram's Alpha, Vulcan's Halo, and all of the activity related to the semantic web demonstrate that semantic technology is inevitably going mainstream, yet it remains largely divorced from the activity of knowledge engineering as practiced using business rule engines embedded within business process management or complex event processing systems.
In effect, there is a chasm between the enterprise use of rules and the world-wide tsunami of semantic technologies. Although rules alone are a powerful and productive programming technology, the production rule metaphor is not well-suited for semantics, broadly speaking. Understanding the requirements for semantic technology in the enterprise versus the capabilities of rule engines exposes the need for more classic artificial intelligence and reasoning in our technology, more emphasis on knowledge than rules in our services, and more emphasis on knowledge technology strategy in the executive suite.
Using Rules to Build Languages
|Brian Jones, Grindwork Corporation|
This presentation is a How-To talk on using rules in place of YACC/Antlr for building languages (whether DSL or true interpreter/compiler).
I want to be heavy in the code. There are many aspects of compilation and generation that give people problems and where rules are very nice:
- Error reporting with sufficient detail to highlight in an IDE
- Error recovery so that the parse can continue in a sane way without just stopping (to get the most errors possible for a given run)
- Managing scopes and symbol trees without needing a complex AST
- Detecting completion of compilation
- Code generation without having to manually process ASTs
My preference is to take a simple language and show the code to enable feature by feature, such that the rule techniques are exercised and problems encountered are shown and solved.
The learning objective is to enable people to use the rule engine as part of the process of building the system, expanding the tools the development team itself uses into the realm of custom languages (DSL or true replacement languages). The relevance for developers is that it provides another domain for tools they have already invested in. Many presentations are about using rules in the problem domain but not in the solution domain. Rule engines are fully capable here but under-utilized. The context is on the development side.
I will be showing the effects of the rules continuously, so this is not a demo in the sense of "here is a product" but "here, the effects of the rules did this ... oh, that's not right! What happened?"
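As a flavor of the approach, the sketch below treats tokens as working-memory facts and lets a production rule repeatedly rewrite matched token patterns into higher-level nodes until quiescence, which is how a rule engine can stand in for a generated parser. The toy grammar (integer addition) is illustrative, not taken from the talk.

```python
# Sketch of parsing with production rules instead of a generated parser:
# tokens are working-memory facts, and each rule rewrites a matched token
# pattern into a higher-level node. The grammar (integer addition) is a
# toy chosen for illustration.

def rule_add(tokens):
    """NUM '+' NUM  ->  NUM(sum); returns True if the rule fired."""
    for i in range(len(tokens) - 2):
        a, op, b = tokens[i:i + 3]
        if a[0] == "NUM" and op[0] == "+" and b[0] == "NUM":
            tokens[i:i + 3] = [("NUM", a[1] + b[1])]  # reduce in place
            return True
    return False

def parse(tokens):
    # Fire rules to quiescence, as a forward-chaining engine would.
    while rule_add(tokens):
        pass
    return tokens

tokens = [("NUM", 1), ("+", "+"), ("NUM", 2), ("+", "+"), ("NUM", 3)]
assert parse(tokens) == [("NUM", 6)]
```

Error reporting and recovery fall out naturally in this style: an additional rule can match the leftover tokens that no grammar rule consumed and record a diagnostic instead of aborting the parse.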
Domain Specific Languages: Notation for Experts
|Wolfgang Laun, Thales Austria|
Domain Specific Languages (DSL) are acclaimed for bridging the gap between domain experts and programmers, permitting the former to author the salient decision logic themselves. The speaker presents and discusses two different approaches, both originating from work at Thales Austria.
The first one uses decision tables (DT) embedded in Java programs. The classical DT structure uses sets of truth values and actions and defines rules as tuples of boolean values and selected actions. The solution, implemented as an Eclipse plugin, has been successfully deployed in a couple of applications. One of them deals with the rendering of graphical representations of an element set with a large number of possible states. The other one handles state transitions in an element controller for railway signals using LED technology. Reports from project development and maintenance phases show that this technique is a useful extension for dealing with complex decision problems.
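The classical DT structure just described, tuples of truth values mapped to selected actions, can be sketched in a few lines. The railway-flavored conditions and actions below are invented for illustration and are not taken from the Thales plugin.

```python
# Sketch of a classical decision table: each rule is a tuple of truth
# values over the conditions, mapped to the actions it selects. The
# conditions and actions are invented for illustration.

conditions = [
    lambda s: s["signal_ok"],
    lambda s: s["lamp_failed"],
]

# (condition truth values) -> selected actions; None means "don't care"
table = {
    (True,  False): ["show_green"],
    (True,  True):  ["show_red", "report_fault"],
    (False, None):  ["show_red"],
}

def decide(state):
    actual = tuple(c(state) for c in conditions)
    for pattern, actions in table.items():
        if all(p is None or p == a for p, a in zip(pattern, actual)):
            return actions
    raise ValueError("incomplete decision table")

assert decide({"signal_ok": True, "lamp_failed": True}) == ["show_red", "report_fault"]
assert decide({"signal_ok": False, "lamp_failed": False}) == ["show_red"]
```

A completeness check, raising on any state no rule covers, is one of the properties that makes decision tables attractive for the state-heavy problems described above.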
The second one demonstrates the development of a DSL close to a natural language, designed for writing rules for establishing train routes in an electronic interlocking system. This work was intended to demonstrate the suitability of rules written in a DSL to act as a specification for complex requirements that are both fully understandable by non-programmers and executable in a rule-based application. The implementation is based on JBoss Drools, using its DSL expander for the translation from DSL to the native rule language.
Experience shows that the complexity of DSL design is commensurate with the intricacy of the logic of the problem domain, but also that a structured approach can be employed beneficially. A discussion of the intrinsic limitations of generic DSL translation and of the dos and don'ts of this technique concludes the presentation.
Agile Knowledge Elicitation: Leveraging use-cases for an effective harvesting of tacit knowledge
|Carole-Ann Matignon, Sparkling Logic|
This talk presents a new approach to knowledge elicitation that combines Agile and AI concepts for modern usage in Decision Management systems.
In particular, attendees will learn how to accelerate harvesting time and increase the quality of the extracted business rules at the same time.
Tell us your business rules and we will execute them. This deceptively simple promise of Business Rules Management Systems underestimates the pain felt by practitioners going through their first project. Turning tacit knowledge into executable business rules is a difficult task.
The famous quote from Michael Polanyi, "We know more than we can tell," summarizes beautifully the challenges faced by business users, business analysts, and rules architects. Although partially documented in regulations and business manuals, knowledge is mostly buried deep in the heads of knowledge workers, and simply asking for it is easier said than done.
- Is the resulting body of rules comprehensive enough?
- Is it specific enough?
- Is it correct and accurate?
In the 1980s-90s, the Artificial Intelligence community invested heavily in various techniques for expert interviews to tackle this very problem. With the AI winter, expert systems became less popular, and so did those efforts. Experts were too few and their time too valuable to participate in those time-consuming interviews.
More recently, Agile Programming transformed development cycles by, among other things, bringing test cases to the forefront of the effort. Communication between product managers and developers has improved by discussing requirements in the context of use cases well established up-front.
Deploying Knowledge Based Technologies in Embedded Systems
|Alan Moore, AJA Video Systems, Inc.|
This presentation will provide a detailed examination of how KB technologies have been deployed in problem domains that do not require AI problem solving techniques but nonetheless have benefited from the use of KB technologies. The focus will be on the integration techniques used and the design patterns employed.
The deployments to be discussed will start with the design of the ICAD System at Cal Poly SLO, an architectural CAD system integrated with a distributed network of KB experts that provided near real-time feedback within the drawing context.
Next, a discussion of two deployments that operated in similar problem domains. The Intel Smart-TV system, a television set-top box, was designed to automatically adjust to the current user's preferences and behaviors. In addition to providing an enhanced TV viewing experience, it also targeted advertisements and program previews to the user based on previous behavior and choices. The other deployment was at a startup named Ten Square that targeted advertisements and coupons based on user behaviors. It was deployed in gasoline pumps, ATMs, PIN pads, and other POS devices.
Third, we will examine in detail the design and implementation of the Ciphergen Biosystems/BioRAD SELDI TOF Mass Spectrometer device control software. This innovative network attached device enables protein identification via spectrum data collection and offline analysis. The device has an embedded rule engine that provides overall device control and enables advanced spectrum data collection and device calibration protocols.
Next, the ongoing design of several AJA Video Systems devices will be discussed along with several potential deployments of KB technologies within these embedded systems. Current challenges and possible design alternatives will be outlined.
Finally, a review of the lessons learned from the discussed deployments will examine design patterns and implementation strategies for further development.
Using Rule-Based Systems for Forecasting
|James Owen, KnowledgeBased Systems Corporation|
This presentation is about using rules for forecasting, and it will cover stationary data, seasonal influences and/or cycles, and moving averages of various magnitudes.
This presentation will use mostly one set (maybe two) of data to show the differences in how these techniques are applied. Working from definitions from last year, this paper will explain these terms and show how they operate within one or more data sets. Moreover, we will consider when one procedure should be preferred over another.
It has been said that "Forecasting can NEVER become a substitute for prophecy..." (Makridakis, 98). However, there is a need for Short-Term Forecasting, Mid-Range Forecasting, and Long-Range Forecasting. Forecasting can be as simple as Single Linear Regression (SLR) or as complex as Multivariate Econometric forecasting, Brown's Parameter Quadratic, Holt's or Brown's Exponential Harmonic smoothing, Box-Jenkins Adaptive, Chow's Adaptive, or Neural Network Time Series forecasting.
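Two of the simpler techniques named above, a moving average and a Single Linear Regression forecast, can be sketched in a few lines of plain Python; the data series is invented for illustration.

```python
# Sketch of a moving average and a Single Linear Regression (SLR)
# forecast, two of the simpler techniques named above. The data series
# is invented for illustration.

def moving_average(series, window):
    """Average of each trailing window of the given width."""
    return [sum(series[i - window:i]) / window
            for i in range(window, len(series) + 1)]

def slr_forecast(series, ahead):
    """Fit y = a + b*t by least squares and extrapolate `ahead` steps."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    b = (sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
         / sum((t - t_mean) ** 2 for t in range(n)))
    a = y_mean - b * t_mean
    return a + b * (n - 1 + ahead)

sales = [10, 12, 14, 16]
assert moving_average(sales, 2) == [11.0, 13.0, 15.0]
assert slr_forecast(sales, 1) == 18.0
```

In a rule-based forecasting system, rules of this kind would select which procedure to apply based on properties of the data set, such as stationarity or the presence of seasonal cycles.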
Goal Oriented Programming
|Mark Proctor, Red Hat|
Rule-based systems focus on conflict resolution, ruleflow, control facts, and other similar techniques to control execution. Orchestration and choreography are two other terms often associated with execution control. This talk will discuss those concepts and introduce Goal Oriented Programming as a robust way to deal with execution in your application.
We will explore both goals in patterns, via opportunistic backward chaining, and goal-oriented agents. Collaboration techniques for goals will be discussed, and concepts such as Semantic Reasoning and Belief-Desire-Intention models will be touched on while explaining the concept of goals.
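The opportunistic backward chaining mentioned above can be sketched as goal-directed rule resolution: a goal is proved by finding a rule whose conclusion matches it and recursively proving that rule's premises, bottoming out in known facts. The facts and rules below are invented for illustration and are not taken from the talk or from Drools.

```python
# Sketch of backward chaining toward a goal: to prove a goal, find a
# rule concluding it and recursively prove the rule's premises, falling
# back to known facts. Facts and rules are invented for illustration.

facts = {"battery_charged", "path_clear"}

rules = {
    "motors_ok":    ["battery_charged"],
    "can_move":     ["battery_charged", "motors_ok"],
    "goal_reached": ["can_move", "path_clear"],
}

def prove(goal):
    if goal in facts:
        return True
    premises = rules.get(goal)
    return premises is not None and all(prove(p) for p in premises)

assert prove("goal_reached")
assert not prove("doors_open")
```

A goal-oriented engine inverts the usual forward-chaining control flow: rather than firing every rule whose conditions match, it derives only the subgoals needed to satisfy the goal at hand.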
Rules, Processes, and Complex Event Processing
|Mauricio Salatino, Plug Tree|
This talk is a practical example of integrating rules into a complex business solution.
The Emergency Service Demo Application shows how a company deals with emergencies that happen in a city. A set of Business Processes is defined to handle different emergency situations. The Business Processes define the steps required by the company to bring a fast and secure service to each emergency. A group of Business Rules is defined to help during each emergency. In this case, rules are used for automatic decisions and as a real-time suggestion mechanism to speed up service times and improve the quality of service.
For each emergency, the company will select and coordinate a set of entities (e.g., the ambulance service, the city police department, and the firefighters) to take control over a specific situation. Different reasoning techniques will be used to guide and assist the entities involved in the emergency: business process management for emergency procedure definition and enforcement, business rules for suggestions and dynamic decisions depending on the context, and complex event processing for real-time monitoring and reaction mechanisms.
After quickly reviewing the concepts of Business Process Management, Business Rules Engines, and Complex Event Processing, we will walk through the Demo Application.
Executing Processes, Taking Decisions and Detecting Situations
|Daniel Selman, IBM WebSphere ILOG BRMS|
During this session Daniel will explain the conceptual relationships between business processes, decisions (often implemented using rules) and detecting interesting situations (often called complex or business event processing). Each of these three domains has a long history, academic foundation, dominant modelling representations and execution algorithms. Daniel presents tools/techniques at a conceptual level and describes some of the challenges for the future.
Increasingly, architects are trying to build applications that marry the strengths of business process management with business rules for taking complex decisions, and CEP/BEP for detecting and reacting to interesting situations. These applications are capable of reacting to changing business conditions more dynamically than in the past and place more control in the hands of end-users.
Rules Technologies Inside Out: Two Expert Perspectives
|Charles Forgy, Production Systems Technologies|
|Carlos Serrano-Morales, Sparkling Logic|
In this moderated chat, Charles Forgy and Carlos Serrano-Morales will discuss the history, successes, failures, and lessons learned of rules technologies, and their aspirations for the future.
Charles, the inventor of Rete, will reveal his thoughts on algorithms, processor and communication architectures, parallelism, and event-driven programming.
Carlos, a thought-leader and pioneer of artificial intelligence, focusing on expert systems, business rules, decision management, and now social logic, will share insights on how these technologies are leveraged in mission-critical business applications, and what is expected from them now and in the future, with a particular emphasis on expressive power, execution capabilities, and integration with other promising technologies.
Advances in inference technologies have enabled the design and implementation of unprecedented decision-heavy applications, and increasing requirements from those applications have in turn sparked a significant amount of innovation in inference technologies. The two trends feed off each other to advance the state of the art in rules technologies.
The two perspectives represented by the speakers reflect these two trends. They overlap, but also contrast. In this discussion, the speakers, who have helped shape rules and decision management technologies, will discuss what has worked, what opportunities were missed, as well as what the aspirations and perspectives are for rules technologies.
Scalability in a Real-Time Decision Platform
|Kenny Shi, eBay|
In this presentation, the fraud detection and risk management platform at eBay will be used as an example to share best practices for achieving scalability, from two aspects: technologies and processes.
Now that you've started building automation for your operational decisions, it's time to think about how to scale up and improve the performance of your decision platform as your business grows.
In this competitive business world, decisions need to be made at more checkpoints within your constantly growing business processes, while continuing to be made faster, smarter, and more adaptively. This is one of the key differentiators in a transactional application. The ability to achieve scalability puts you ahead of your competitors and wins you more customers.
Decisions need to be treated as assets of your business. A good decision platform should be able to scale horizontally without increasing complexity and overhead. We will go through topics such as software stacks, the physical deployment model, distributed rules runtimes, distributed model runtimes, using a BRMS to achieve variable interoperability between rules and models, NoSQL variable caching, inter-process communication between different decision points, a decision simulation and testing framework on Hadoop, and automated rules discovery through data mining.
We will also look at some of the processes that have worked at eBay, such as pre-production statistical testing, phased markup of new rules with built-in monitoring and reporting, analytical dashboards on rule hit rates and catch rates, buddy checks and approval chains among rule analysts, and massive adaptation of rules after business context/model shifts.
Managing Imperfect Information Using Imperfect Rules: Approximate Guidelines
|Davide Sottara, University of Bologna|
The goal of this deep-dive talk is to discuss how the use of "imperfect" logics could improve the expressiveness of a language used to define a rule base, as well as the reasoning capabilities of an associated engine. Such logics include, but are not limited to, various flavors of fuzzy logics, possibilistic logics, probabilistic and belief logics and combinations thereof.
"Perfection is an ideal condition that can hardly be reached: while one should always strive to achieve it, one should also be content with what they can really get in practice."
While arguably reasonable, the principles of this statement are almost always disregarded in rule-based systems. In struggling to use the perfection of boolean logic to model an idealized version of human inference processes, rules often capture only idealized models of the domains they purport to describe. While boolean logic is precise and certain, in many real cases both the available information and the reasoning processes to be applied are vague, imprecise, ill-defined and/or uncertain: in a single word, imperfect (adapted from P. Smets).
When this aspect is part of the nature of an application domain, ignoring it rather than handling it does not yield more robust models; it effectively amounts to a loss of information.
Different logic frameworks are suited to handling different aspects of imperfect information. The talk will focus on what kind of rules can be written in the context of each framework, comparing them to their boolean counterparts, and what knowledge modelling problems they are better suited to solve. While no specific design patterns formally exist, a number of common scenarios will be presented. For each one, modelling criteria and rule design guidelines will be proposed, analyzing aspects including the source of the imperfection, its intrinsic nature, the consequences which can be logically inferred, and the constraints making this inference valid.
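As a loose illustration of the contrast the talk draws (the scenario, names, and thresholds here are ours, not the speaker's), a crisp boolean rule discards gradual information that a fuzzy counterpart retains:

```python
def boolean_rule(temp_c: float) -> bool:
    """Crisp rule: alert if and only if temperature is at least 39 C."""
    return temp_c >= 39.0

def high_fever_degree(temp_c: float) -> float:
    """Fuzzy membership for 'high fever': 0 at 38 C, rising linearly to 1 at 40 C."""
    return min(1.0, max(0.0, (temp_c - 38.0) / 2.0))

def fuzzy_alert_degree(temp_c: float, reading_certainty: float) -> float:
    """Fuzzy rule: the conclusion holds to the degree of the weakest
    premise (min t-norm), combining fever degree with sensor certainty."""
    return min(high_fever_degree(temp_c), reading_certainty)
```

At 38.9 C the boolean rule simply fails, while the fuzzy version reports a 0.45 degree of alert, preserving exactly the kind of information whose loss the talk warns about.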
As part of the best practices discussion, it will be shown that native support for imperfection is a key feature for the tight integration of rule bases with tools such as predictive, classification, and regression models. While usually based on quantitative rather than qualitative approaches, such models are complementary to rules but are also a main source of imperfection, so it will be shown how their seamless but coherent integration can enrich a business rule base.
Semantic Technologies and the Cloud: Rules for the Next Generation
|Said Tabet, RuleML|
Keynote: Today, we are facing major challenges in information management and at the same time witnessing a drastic shift with the emergence of Cloud computing and the progress made with Semantic Technologies.
There are many offerings of BRMS solutions and products, including open-source ones. Unfortunately, the recent trend has been to constrain AI and knowledge/logic technologies within the procedural model of existing legacy environments. Enterprise knowledge management needs strong semantic technologies, powerful inferencing systems, and advanced machine learning capabilities, not yet another sophistication of spreadsheets and related frameworks. In order to fully embrace the power of rule-based approaches, businesses will need to be inspired and to adopt relevant standards and best practices. Very specific areas such as Cloud trust; information governance, risk management, and compliance; distributed systems; and mobile applications offer a unique opportunity to realize the true potential of knowledge technology.
To move mission-critical applications to the Cloud, a high level of trust and assurance is needed. Trust includes service level agreement negotiation and enforcement, a rule-based problem. Innovation in distributed rules and logic-based technology in general will help face the challenges of elasticity, co-tenancy, privacy and security, dynamic policy, and intelligent data center management and analytics.
Event-Driven Rules: Experiences in CEP
|Paul Vincent, Tibco|
Here we present an introduction to rule-driven CEP, followed by case studies (from the user base of TIBCO Software's CEP technology and from earlier user group presentations) on how declarative rules provided a suitable knowledge representation for event-driven processes in business applications, covering how and why event-driven rules are best applied to solve challenges of scalability, complexity, RAD, or combinations thereof.
Event Driven Architectures and Complex Event Processing are demonstrating interesting alternatives to the app-server-executing-business-logic approaches that are prevalent in IT today. Although there are not many pure rule-driven CEP tools around (i.e. classed as rule engines), the benefits of the event-decision-action pattern these allow have proved very useful in a number of application cases.
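The event-decision-action pattern referred to above can be sketched in a few lines. This is an illustrative toy, not TIBCO's API; all class and function names are ours:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Event:
    kind: str
    payload: dict

@dataclass
class Rule:
    condition: Callable[[Event], bool]   # the "decision" part
    action: Callable[[Event], None]      # the "action" part

@dataclass
class EventProcessor:
    rules: List[Rule] = field(default_factory=list)

    def on(self, condition: Callable[[Event], bool],
           action: Callable[[Event], None]) -> None:
        self.rules.append(Rule(condition, action))

    def dispatch(self, event: Event) -> None:
        # Declarative dispatch: every rule whose condition matches fires.
        for rule in self.rules:
            if rule.condition(event):
                rule.action(event)
```

The business logic lives in declarative condition/action pairs rather than in application control flow, which is the essential contrast with the app-server-executing-business-logic approach.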
Domain-Specific Language and Rules Engine Implemented in Python
|Michael Walsh, MITRE|
This presentation is about the design and development of a rules engine and an accompanying DSL (domain-specific language) for expressing policies to orchestrate and control a dynamic network defense cyber-security platform being researched at MITRE in its Innovation Program.
In the past, the platform, authored in Python, relied on a set of property files that were authored and later parsed by business logic embedded into the platform's program flow to determine how it was to respond to network events mounted over covert network channels. By replacing these instances of code with an instance of a rules engine loaded with one or more policies and the discrete present conditions of the network, the platform is allowed to reason about and direct itself.
As no suitable rules engine was available for Python, a policy language and a forward-chaining rules engine were built from scratch. The policy language's grammar is based on a subset of Python language syntax, and both the parser and the lexer were implemented with the help of the ANTLR3 parser generator and runtime for Python. The interpreter, the rules engine, and the remainder of the code, such as objects for conveying discrete network conditions, were also authored in Python. Python's approach to the object-oriented programming paradigm, where objects consist of data fields and methods, did not easily lend itself to describing these discrete network conditions. Because the data fields of a Python object, referred to syntactically as attributes, can be and often are set on an instance of a class, they may not exist prior to the class's instantiation. In order for a rules engine to work, it must be able to fully introspect an object instance representing a condition. This proves very difficult unless the property decorator, with its getter and setter attributes introduced in Python 2.6, is adopted and used consistently when authoring these objects. Coincidentally, the Getter/Setter pattern used frequently in Java is singularly frowned upon in the Python developer community, with the refrain "Python is not Java."
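The introspection issue can be seen in a small example (ours, not the MITRE code): a plain attribute assigned in __init__ is invisible at the class level, while a @property declaration is visible before any instance exists.

```python
class PlainCondition:
    """Attribute set only in __init__: invisible until instantiation."""
    def __init__(self) -> None:
        self.port_open = False

class DeclaredCondition:
    """Attribute declared via @property: introspectable on the class itself."""
    def __init__(self) -> None:
        self._port_open = False

    @property
    def port_open(self) -> bool:
        return self._port_open

    @port_open.setter
    def port_open(self, value: bool) -> None:
        self._port_open = value

# Class-level introspection, as a rules engine would perform it
# before any condition object has been created:
assert not hasattr(PlainCondition, "port_open")
assert hasattr(DeclaredCondition, "port_open")
```

Declaring conditions this way lets the engine discover all attributes of a condition type up front, at the cost of the getter/setter boilerplate the Python community usually avoids.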
Starting out, it was assumed the platform would be integrated with the best open-source rules engine available for Python, as there are countless implementations for Ruby, Java, and Perl, but surprisingly none fit the project's needs. This led to the thought of inventing one; simply typing the keywords "python rules engine" into Google, though, returns the advice not to invent yet another rules language, but instead to just write your rules in Python, import them, and execute them. The basis for this advice boils down to the claim that doing otherwise does not fit with the Python philosophy. At the time, I did not believe this to be true, nor fully contextualized, and yet admittedly, I had not authored a line of Python code nor used ANTLR3 prior to this effort. Looking back, I firmly believe the act of inventing a rules engine and abstracting it behind a nomenclature that describes and illuminates a specific domain is the best way for the network defender to think about the problem.
Achieving Scalability in Rule Based Systems
|George Williamson, Union Pacific Railroad|
This talk will present the use case of Union Pacific's Car Scheduling Yard Block Assignments and examine the strategies used to achieve high performance, satisfy memory size limitations, and deliver a system that is both horizontally and vertically scalable.
Union Pacific Railroad generates revenue by moving railcars from one location to another. Hence, Car Scheduling is one of the railroad's most critical operations. One of the first steps in scheduling a railcar is classifying it into a yard block, which is a grouping of railcars that move together as a unit until, and possibly beyond, the next classification rail yard.
Yard blocking rules are defined at each of the 1,180 scheduling locations and are composed of approximately 20,000 moderately complex, hand-written business rules that are suitable for implementation using rules-based technologies. Meanwhile, there are over 80,000 railcars owned and operated by Union Pacific, each of which is scheduled frequently. Each scheduling request, in turn, results in several yard block assignments, one for each scheduling location traversed by the car through its schedule. Processing this much data in a timely manner through a rules-based system introduces scalability issues that must be addressed.
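One natural way to attack scale of this kind is to partition the rule base by scheduling location, so a yard block assignment consults only the handful of rules for that location rather than all 20,000. The sketch below illustrates that generic strategy; it is our simplification, not necessarily the approach the talk describes, and the location and block names are invented:

```python
from collections import defaultdict

class BlockingRuleBase:
    """Partitions blocking rules by scheduling location, so each request
    evaluates only the rules of the locations its schedule traverses."""

    def __init__(self) -> None:
        self._by_location = defaultdict(list)  # location -> [rules]

    def add_rule(self, location: str, rule) -> None:
        self._by_location[location].append(rule)

    def assign_block(self, location: str, railcar: dict):
        """Return the first yard block whose rule matches this car, else None."""
        for rule in self._by_location[location]:
            block = rule(railcar)
            if block is not None:
                return block
        return None

# Illustrative rule: at a hypothetical location, grain cars group eastbound.
rules = BlockingRuleBase()
rules.add_rule("NORTH_PLATTE",
               lambda car: "GRAIN-EAST" if car["commodity"] == "grain" else None)
```

Partitioning bounds the per-request work and memory footprint, and since locations are independent, the partitions can also be sharded across nodes for horizontal scaling.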