A principal focus of legal-tech software companies has long been efficiency: helping lawyers and legal organizations automate routine tasks, become more productive and eliminate ‘busywork’. Across different kinds of applications this has meant, for example, standardizing legal forms, digitizing records to make them more accessible and useful, and reducing the cost of storing and retrieving various kinds of legal information.
In past decades, most software written in the legal-tech space used logic-based programming routines to perform simple data-processing tasks: storing information, retrieving it according to exact rules given by the user, and executing user-defined business rules. For example, a legal document-creation application would be programmed to let lawyers input their firm’s required policies, clauses and acceptance criteria, and then to apply those criteria, quite explicitly, to incoming contracts, flagging contractual problem areas for human readers and ensuring the firm’s business standards were followed.
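The rule-based approach described above can be sketched in a few lines of Python. The policies and keywords here are invented purely for illustration; the point is that every criterion is spelled out explicitly by a human, and the program does nothing it was not directly told to do:

```python
# A minimal sketch of a rule-based contract checker: the firm's policies
# are expressed as explicit keyword rules (hypothetical examples), and the
# program flags any clause that matches a forbidden pattern.

RULES = [
    ("auto-renewal", "Policy: auto-renewal clauses require partner sign-off"),
    ("unlimited liability", "Policy: liability must be capped"),
]

def flag_problem_areas(contract_text):
    """Return the policy warnings triggered by the contract text."""
    text = contract_text.lower()
    return [warning for keyword, warning in RULES if keyword in text]

flags = flag_problem_areas("This agreement includes an auto-renewal term.")
# flags -> ["Policy: auto-renewal clauses require partner sign-off"]
```

A real document-review system would use far richer matching than simple keywords, but the logic remains the same: the rules are authored by people, not learned from data.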
In recent years, however, more and more legal-tech applications have employed *Artificial Intelligence (AI)* methods to deliver better and more powerful results. ‘AI’ is a very broad term, defined by the early AI pioneers Marvin Minsky and John McCarthy as ‘any task performed by a program or a machine that, if a human carried out the same activity, we would say the human had to apply intelligence to accomplish the task’. (footnote 1)
A common AI approach that proliferated in the 1980s and 1990s and is still used today was the creation of *expert systems*, computer systems that emulate the decision-making abilities of human experts (such as lawyers) by reasoning through bodies of experience-based knowledge, represented principally as a large collection of if–then rules rather than conventional procedural code.
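To make the idea of an expert system concrete, here is a toy forward-chaining rule engine in Python. The legal ‘knowledge’ (facts and rules) is invented for illustration; what matters is the shape of the system: expertise lives in a collection of if–then rules applied repeatedly to a set of known facts, not in procedural code:

```python
# A toy forward-chaining inference engine in the spirit of 1980s expert
# systems. Each rule says: IF all the condition facts are known, THEN the
# conclusion fact becomes known too. The rules below are hypothetical.

RULES = [
    ({"signed", "consideration"}, "binding_contract"),
    ({"binding_contract", "breach"}, "damages_may_be_claimed"),
]

def infer(facts):
    """Repeatedly apply the rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = infer({"signed", "consideration", "breach"})
# "damages_may_be_claimed" is now in result, derived in two chained steps
```

Note that the second rule fires only after the first has added `binding_contract` to the fact set; chaining conclusions like this is how expert systems ‘reason’ through a knowledge base.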
Another increasingly popular AI technique is *machine learning*, whereby instead of instructing computers how to accomplish tasks via rules or set procedures, programmers feed a large amount of data into a computer program, which it then uses to ‘learn’ how to carry out a specific task for itself, such as understanding speech or describing the contents of an image.
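The contrast with the rule-based approach can be shown with a deliberately simple learner. In this sketch the program is never told *where* to draw the line between (hypothetical) ‘risky’ and ‘safe’ clauses; it finds the best dividing threshold from labelled training examples on its own. The feature, labels and data are all invented for illustration:

```python
# A minimal illustration of the machine-learning idea: instead of
# hand-coding a rule, the program learns a decision threshold from
# labelled examples (a deliberately simple one-feature learner).

def train(examples):
    """examples: list of (feature_value, label) pairs. Find the threshold
    that classifies the most training examples correctly."""
    best_threshold, best_correct = None, -1
    for t in sorted(v for v, _ in examples):
        correct = sum((v >= t) == (label == "risky") for v, label in examples)
        if correct > best_correct:
            best_threshold, best_correct = t, correct
    return best_threshold

training_data = [(1, "safe"), (2, "safe"), (8, "risky"), (9, "risky")]
threshold = train(training_data)   # learns threshold = 8 from the data

def predict(value):
    return "risky" if value >= threshold else "safe"

# predict(10) -> "risky"; predict(3) -> "safe"
```

Real machine-learning systems use thousands of features and far more sophisticated models, but the core shift is the same: the decision boundary comes from data rather than from a programmer’s explicit rules.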
AI and machine learning have both been around since the mid-20th century: the realization that computers can be taught to learn for themselves has been credited to Arthur Samuel, an IBM programmer who demonstrated in 1959 that the company’s first commercial computer, the IBM 701, could learn to play checkers.(footnote 2)
However, early AI applications were limited in their practical ability to deliver ‘smart’ (and thus useful) business applications by a relative lack of computing power, and by limited access to the large amounts of digital information needed to ‘train’ them. The emergence of the Internet and ‘Big Data’ in the 1990s and 2000s was a major driver of the current boom in development of AI applications: since 2000, there has been a 14X increase in the number of active AI startups, while investment into such startups by venture capitalists has increased 6X.
One machine-learning method that has gained popularity in the past 10 years is *deep learning*, which builds on an older idea from the 1940s of modelling how a brain ‘thinks’ using neurons. This led to the creation of *neural networks*, which enable applications to learn to deliver ‘intelligent’ results by artificially simulating the structure of the brain.
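The building block of a neural network is a single artificial neuron. As a sketch of the concept, the perceptron below weighs its inputs, fires if the weighted sum crosses a threshold, and nudges its weights after every mistake; trained here on the logical AND function, it learns the right behaviour without ever being given an explicit rule. (Deep learning stacks many layers of such units and uses more sophisticated training, but the principle is the same.)

```python
# A single artificial neuron (a perceptron): weighted inputs, a firing
# threshold, and a simple learning rule that adjusts the weights after
# each wrong answer. Here it learns the logical AND function.

def train_perceptron(data, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias (the firing threshold, in effect)
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1       # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# predict(1, 1) -> 1; predict(0, 1) -> 0
```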
To read more about how these models apply to the world of trademark clearance and monitoring, download our white paper, “Artificial Intelligence in Trademark Software: A Primer”.
1 Nick Heath, ‘What is AI? Everything you need to know about Artificial Intelligence’, ZDNet, 12 February 2018
2 See e.g., Wikipedia, https://en.wikipedia.org/wiki/Arthur_Samuel