Trademark professionals are rightly concerned about data quality and its impact on their work product. Data quality - accuracy, validity, completeness, consistency and timeliness - is an essential part of all major trademark management operations, including brand clearance, brand protection, docketing, licensing, data migration, M&A, and due diligence. If the trademark data used in these operations contains errors - e.g., missed or delayed official updates, gaps in records or individual data fields, incorrect or outdated owner or renewal date information - then organizations can be exposed to significant costs and risks.
Those legal departments and service providers that take measures to improve the quality of the data they use - and that select vendors and business partners that do the same - can expect to make fewer mistakes, derive better legal opinions and decisions, and ultimately achieve better brand protection, higher profits, stronger client relationships, and other benefits.
However, taking such improvement measures typically isn't easy or cheap. Maintaining good data quality has long been, and continues to be, an industry-wide challenge in the trademarks space - from the smallest single practices all the way up to the government Patent & Trademark Offices (PTOs) themselves.
For most of the 20th century, the lion's share of "trademark data" (the contents of trademark applications, logo and marketing imagery, product specimens, search and examination reports, opposition filings and decision recordals, registration certificates, gazette publications, fee payments and receipts, renewal applications and approvals, and so on) was generated manually - by human beings entering information into paper forms. And while most of those human beings were careful to fill out the forms properly, they were prone to human errors - misspellings, wrong registration IDs, incorrect dates - and to misinterpretations caused by bad handwriting.
But since the 1970s, the trademarks industry has been going through a gradual transformation from paper to digital environments - with the forms becoming electronic, and old paper files being transferred (often manually) to digital databases.
This transformation has been slow and difficult, requiring significant capital expenditures, data-migration and technology expertise, and a commitment to quality improvement that many organizations simply have not possessed or could not afford. Moreover, the digitization process itself often introduced still more errors along the way, as written data was keyed in.
As a result, some trademark-industry organizations - including different country PTOs - are today much further along the digital path than others. And data quality at these different organizations is highly variable.
Meanwhile, continued global growth in the number of brands and trademarks, combined with declining private-sector legal department budgets and corresponding rising demand for more-efficient solutions, has only made this data quality problem more acute: trademark information is being generated at increasingly faster rates, while larger volumes of data are inherently more difficult to manage.
So improving and maintaining trademark data quality is a complex, multi-faceted and challenging task. That said, there are several concrete things that trademark managers can help their organizations to do in order to improve the overall quality of the information they create, update, access, and utilize in their work.
1. Circle the wagons
Is improving and maintaining your organization's trademark data worth the effort, and expense? As a trademark manager, you may believe so - but in order to make such improvements actually happen, they need to become an organizational priority. This, in turn, means convincing key stakeholders (i.e., key executives or partners) to get behind them. Which usually means building a business case - and securing a budget.
Easier said than done. Trademark data improvement is often a classic case of "clear costs but unclear benefits": the downside is typically very concrete (system costs, salary costs, consultant costs), but the upside - or perhaps more accurately, the lack of future downside - is often maddeningly difficult to quantify and "prove". For a data-improvement business case to be taken seriously, it needs to be presented realistically and in the language of the business - speaking to the critical and specific business priorities of the key stakeholders involved. What could go right, if the data is improved and properly maintained? What could go wrong, if it isn't? What are the likelihoods of those possibilities actually occurring, and what could be the impact on revenue and profits? Understanding and communicating these kinds of scenarios - even at an order-of-magnitude level - can not only enable you to secure senior-level support for your business case, but also help you identify and engage the right level of senior business sponsorship.
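That order-of-magnitude scenario arithmetic can be sketched as a simple expected-loss model. Every figure below is a hypothetical placeholder - substitute estimates for your own portfolio and jurisdictions:

```python
# Back-of-the-envelope expected-loss model for a data-quality business case.
# All probabilities and costs are illustrative placeholders, not benchmarks.

scenarios = [
    # (description, estimated annual probability, estimated cost if it occurs)
    ("Missed renewal deadline on a core mark", 0.02, 500_000),
    ("Clearance opinion based on stale owner data", 0.05, 80_000),
    ("Docket cleanup forced by an audit or M&A review", 0.10, 120_000),
]

# Expected annual loss = sum of (probability x cost) across scenarios.
expected_annual_loss = sum(p * cost for _, p, cost in scenarios)
print(f"Expected annual loss from data errors: ${expected_annual_loss:,.0f}")

# Compare against the proposed annual cost of the improvement programme.
programme_cost = 20_000
print(f"Net expected annual benefit: ${expected_annual_loss - programme_cost:,.0f}")
```

Even a rough model like this frames the conversation in business terms: a cost line versus an avoided-loss line.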
Finally, your data improvement goals must include realistic expectations about the degree of data quality excellence that can be achieved and maintained. Your incremental costs (in time and dollars) to achieve greater quality levels must be balanced against the benefits of that quality. Quality is a level of excellence: it is a degree, not an absolute, and it is unlikely that your data will ever be perfect. There are always "acceptable" data error rates, and they need to be acknowledged by evaluating the data within its context of use (see, e.g., Haug, A., Zachariassen, F., & van Liempd, D. (2011). The cost of poor data quality. Journal of Industrial Engineering and Management, 4(2), 168-193). Once acceptable trademark data quality is achieved, ongoing monitoring must be maintained to ensure that quality doesn't degrade because of changes in the environment, such as new processes, system changes, or different personnel.
2. Stay the course
Improving trademark data quality is a marathon, not a sprint. It requires a long-term commitment that is organization wide - and durable enough that it can survive the departures of particular individuals or periods of reorganization.
Furthermore, senior leadership buy-in (and budgeting) is only the tip of the data quality iceberg. In order to succeed, achieving data quality standards needs to become part of the organization's DNA - the job of everybody in the organization who touches the data, and even of many who don't. Data quality needs to be accepted as an asset of an IP organization, part of everyone's job description, and (in many cases) a parameter of performance evaluations and incentive packages.
To help make this a reality, specific employees should probably be assigned “data stewardship” (or “governance”) responsibilities, including establishing data quality policies, parameters and standards. Do the data fields contain the types of data they are supposed to - and are they properly labeled and documented so anyone wishing to use the data can quickly understand what they are dealing with? What metrics are needed to track improvements, and how can these best be measured?
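One concrete metric a data steward might track is field-level completeness - the share of records where a required field is actually populated. The record layout below is an illustrative assumption, not a standard docket schema:

```python
# Sketch of a simple field-completeness metric for data stewardship reporting.
# The records and field names are hypothetical examples.
from datetime import date

records = [
    {"reg_no": "TM-001", "owner": "Acme Corp", "renewal_date": date(2026, 3, 1)},
    {"reg_no": "TM-002", "owner": "", "renewal_date": date(2025, 8, 15)},
    {"reg_no": "", "owner": "Globex Ltd", "renewal_date": None},
]

def completeness(records, field):
    """Share of records where the field is present and non-empty."""
    filled = sum(1 for r in records if r.get(field))
    return filled / len(records)

for field in ("reg_no", "owner", "renewal_date"):
    print(f"{field}: {completeness(records, field):.0%} complete")
```

Similar counters can be defined for validity (does the value match its expected format?) and timeliness (when was it last confirmed against the PTO?), giving the steward a small dashboard of trackable numbers.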
Another key consideration over time is system architecture - are your database systems (and reporting capabilities) complete enough, and do they talk to each other in ways that ensure quality is maintained? A good docketing technology, in particular, can go a long way to helping IP professionals consolidate data, and manage the deadlines and documents associated with the trademark application and renewals processes.
3. Think globally, act locally
For trademark managers used to dealing with one country's way of doing things, expanding one's brand coverage into new territories can be a daunting experience from a data perspective, with a significant local learning curve. International trademark practitioners have long recognized that when applying for and managing trademarks across different jurisdictions, there is a striking variety in trademark processes and methods, in forms and required data fields, and in the ways trademark information is stored and represented.
For example, different countries disagree not only on the rules around trademark renewal expirations and "grace periods", but also on the rights of trademark holders with respect to expired or "dead" trademarks in the first place - e.g., jurisdictions have completely different scenarios under which so-called "zombie" trademarks can rise again from the grave to reclaim their past rights.
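In a system that tracks marks across jurisdictions, such rules often end up encoded as per-country configuration. The sketch below illustrates the shape of such a rules table; the grace-period figures are placeholders, not legal advice, and a 30-day month is a deliberate simplification:

```python
# Illustrative per-jurisdiction renewal-rule table. The months shown are
# placeholders -- real grace periods vary by jurisdiction and change over time.
from datetime import date, timedelta

GRACE_PERIODS = {  # months of grace after the renewal deadline (hypothetical)
    "US": 6,
    "EU": 6,
    "JP": 6,
}

def last_day_to_renew(jurisdiction: str, renewal_deadline: date) -> date:
    """Deadline plus the jurisdiction's grace period (months approximated as 30 days)."""
    months = GRACE_PERIODS[jurisdiction]
    return renewal_deadline + timedelta(days=30 * months)

print(last_day_to_renew("US", date(2025, 1, 1)))
```

The point is architectural: jurisdiction-specific rules belong in data (a table a non-programmer can review and update), not hard-coded into application logic.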
In order to account for such differences, you will likely need to adapt the ways in which your own systems capture and store data from PTOs. According to an old adage, when you integrate IT systems, you inherit their design decisions - driven by their divergent goals, differing processes, organizations or cultures, and obsolete technologies. This is no different with trademark registry data. Integration becomes a question of how to consolidate those differences.
This consolidation is also (and perhaps, especially) an issue for trademark search and watch service providers. At TrademarkNow, for example, we have spent a considerable amount of energy unifying these kinds of differing standards - e.g., divergent trademark statuses, product and image class systems, expirations and grace period rules, to say nothing of translating different languages - in order that search results appear in a consistent and universally-usable form. Keeping up with all the global changes is a constant challenge that we must take on every day.
4. Don’t assume - cross-reference and sync
Identifying errors in your trademark data can be difficult because it’s often unclear what “correct data” is. For instance, if the information in your docketing system - such as renewal deadlines and trademark owners - were entered years ago, it may be hard to know if the information is still up-to-date, or if it was even correct to begin with.
This problem can be compounded in cases of M&A and due diligence, when you must validate similar information in other organizations’ dockets - where you’ve had no control over the level of data quality (if any).
One of the best ways to get around this problem is to do cross-referencing - synchronizing your docket database with that of the different PTO authorities which might hold the same renewal dates and other information, in order to flag any differences that might represent errors. Some docketing system providers (such as WebTMS) already provide services that will allow you to perform such synchronizations in real time, with the latest information.
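At its core, this cross-referencing is a field-by-field comparison of your docket against fresh official data, with every mismatch flagged for review. The records and field names below are hypothetical; a real sync would pull from your docketing system's API or a bulk PTO download:

```python
# Minimal sketch of cross-referencing a local docket against fresh PTO data.
# Records and field names are illustrative, not a real docket schema.

docket = {
    "TM-001": {"owner": "Acme Corp", "renewal": "2026-03-01"},
    "TM-002": {"owner": "Globex Ltd", "renewal": "2025-08-15"},
}
pto = {
    "TM-001": {"owner": "Acme Corporation", "renewal": "2026-03-01"},
    "TM-002": {"owner": "Globex Ltd", "renewal": "2025-09-15"},
}

def diff_dockets(local, official):
    """Yield (reg_no, field, local_value, official_value) for each mismatch."""
    for reg_no, record in local.items():
        for field, value in record.items():
            official_value = official.get(reg_no, {}).get(field)
            if value != official_value:
                yield reg_no, field, value, official_value

for mismatch in diff_dockets(docket, pto):
    print(mismatch)  # each mismatch goes to a reviewer, not auto-overwritten
```

Note that a flagged difference is only a candidate error - sometimes the PTO record is the stale one - which is why mismatches should be queued for human review rather than silently overwritten.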
5. With vendors, trust but verify
One relatively easy way to improve your trademark data quality, of course, is to work with vendors who themselves have good data quality. And like the PTOs themselves, different trademark search and watch vendors have greatly differing levels of data quality. Trademark data quality is expensive to achieve and maintain, not only for legal practitioners, but for trademark search vendors - and becomes exponentially more so as a search provider’s geographic coverage expands. In order to provide good search results, vendors must both gain access to PTO “baseline” data and keep up with each PTO’s data updates - a difficult business, given the differing systems, processes and technologies involved.
For example, in some cases PTOs publish official trademark updates to outside parties via modern, all-digital processes, in a structured format (XML feeds and software application program interfaces, or APIs), making data updates from these Offices both seamless and instantaneous. At the other extreme, many countries continue to rely on manual, paper-based systems and publish updates in the form of physical documents, which must be digitized by service providers manually. In-between these extremes are countries whose trademark registries publish their updates via electronic documents (PDFs) and/or on website search tools. And at the same time, country PTOs have a wide variance in their own frequency of updates, the number of different data fields they provide, and in the strength of their own (internal) data quality controls.
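For the structured end of that spectrum, ingestion follows a common pattern: parse the feed, extract the fields, and validate each record before it touches the production database. The XML shape below is entirely hypothetical - each office defines its own schema - but the pattern is the same:

```python
# Sketch of ingesting a structured PTO update feed. The XML layout here is
# invented for illustration; real offices each publish their own schemas.
import xml.etree.ElementTree as ET

FEED = """
<updates>
  <trademark reg_no="TM-001">
    <owner>Acme Corp</owner>
    <status>Registered</status>
  </trademark>
</updates>
"""

root = ET.fromstring(FEED)
for tm in root.iter("trademark"):
    record = {
        "reg_no": tm.get("reg_no"),
        "owner": tm.findtext("owner"),
        "status": tm.findtext("status"),
    }
    # Basic validation before the record is loaded anywhere.
    assert record["reg_no"], "registration number must be present"
    print(record)
```

The same validate-before-load discipline applies regardless of whether the upstream source was an XML feed, a scraped PDF, or a manually keyed gazette.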
Likewise, the same can be said of the search vendors themselves: each vendor has its own data quality standards, different methods and levels of rigor in providing updates and identifying and fixing errors, and different ways of interpreting and representing data across different jurisdictions. Many "free" and low-cost trademark search vendors, for example, do not include "dead" or inactive trademarks in their results at all - and thus may miss relevant information that could prove costly to a business owner (in cases where "zombie" marks "come back from the dead").
Given all this complexity, what, as a trademark manager, can you do to keep your search providers honest about their quality? And how can you compare them?
First, you can insist that your vendors be transparent about their data standards and updates. For example, there's no excuse for not providing "last update" dates for each country and data source, as well as accurate "baseline counts" of the number of trademarks listed from each source, which can be compared vendor to vendor in real time. Over time (during free software trials, for example), you should be able to get a sense of which vendors are doing a better job of keeping up with PTO updates in a timely fashion.
Second, you can also ask your vendor about its quality control processes, and insist that they go into some detail about how robust they are. For example, when your vendor updates its data from “digital” PTOs that provide their updates electronically, what kinds of data processing QA checks do your vendors run? Do they use statistical analyses for monitoring errors and inconsistencies, and what’s done in cases where they do notice errors or anomalies? If they overwrite relevant existing marks/contents in their database with the new updates, do they retain and keep track of the file history, in order to display / review past events for their users?
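One example of the kind of statistical monitoring worth asking about: a sanity check that flags any incoming update batch whose record count deviates wildly from recent history. The history and thresholds below are made-up illustrations:

```python
# Sketch of a simple statistical sanity check on incoming update batches:
# flag a batch whose record count deviates sharply from the recent average.
# The counts and thresholds are illustrative placeholders.

history = [1020, 980, 1005, 995, 1010]  # recent daily update record counts
new_batch = 120                          # today's count

mean = sum(history) / len(history)
anomalous = not (0.5 * mean <= new_batch <= 2.0 * mean)
if anomalous:
    print(f"Anomalous batch size {new_batch} (recent mean {mean:.0f}) - hold for review")
```

A suspiciously small batch often signals a truncated feed or a parsing failure upstream; a suspiciously large one can mean duplicated records. Either way, the batch is held for review instead of being loaded blindly.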
For updating paper-based PTO gazettes, you can ask vendors how they digitize the paper files, and what kind of error rates they achieve as a result. How do they validate the completeness of each dataset and ensure that data are inputted in the correct order?
For example, TrademarkNow applies an extremely robust quality-assurance (QA) regimen to all of our incoming trademark data before it is added to our system: in our process, every digitized trademark record is validated via double keying by two separate operators (two different entries are recorded, which are then compared for variances) - securing 99.9997% correctness of the related data fields.
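The double-keying idea itself is straightforward: two operators key the same paper record independently, and any field where the two entries disagree is routed to a reviewer rather than loaded as-is. The sketch below illustrates the comparison step only, with invented records - it is not TrademarkNow's actual pipeline:

```python
# Illustrative double-keying comparison: two independent entries of the same
# paper record are diffed, and disagreeing fields are flagged for review.

entry_a = {"reg_no": "TM-001", "owner": "Acme Corp", "class": "25"}
entry_b = {"reg_no": "TM-001", "owner": "Acme Corp.", "class": "25"}

def keying_variances(a, b):
    """Fields where the two independent entries disagree."""
    return {f for f in a.keys() | b.keys() if a.get(f) != b.get(f)}

flagged = keying_variances(entry_a, entry_b)
print(flagged)  # these fields go back to a human reviewer
```

The statistical leverage comes from independence: for two operators to let the same error through, both must make the identical mistake on the identical field, which is far rarer than either making an error alone.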
Finally, you can ask vendors about their processes for regularly cleaning, unifying and repairing the PTO data itself. Given that PTOs themselves often make mistakes, how does your search vendor deal with these? For example, do they fix misspellings of trademark owner names, combine brands that belong together under the same owner, etc.? In TrademarkNow’s case, when we find such errors, we either solve the issues algorithmically or manually by our data-quality team, or in some cases, we will contact the PTO and request fixes from them. In all cases, our policy is always to display BOTH the original “official” PTO record (with the mistake), and our own note and suggested data correction.
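Grouping records that belong to the same owner usually starts with name normalization. The toy pass below only strips punctuation, case, and a few common corporate suffixes - real cleaning involves much richer rules and manual review, and the suffix list here is an illustrative assumption:

```python
# Toy owner-name normalization of the kind a data-quality team might run to
# group records belonging to the same owner. Deliberately simplistic.
import re

SUFFIXES = {"inc", "corp", "corporation", "ltd", "llc", "co"}  # illustrative

def normalize_owner(name: str) -> str:
    """Lowercase, strip punctuation, and drop common corporate suffixes."""
    tokens = re.sub(r"[^\w\s]", "", name.lower()).split()
    return " ".join(t for t in tokens if t not in SUFFIXES)

names = ["Acme Corp.", "ACME Corporation", "Acme, Inc."]
groups = {normalize_owner(n) for n in names}
print(groups)  # all three variants collapse to one normalized key
```

Crucially - as the policy described above reflects - the normalized key should be stored alongside the original official record, never in place of it.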
Towards a better future
If all of the above seems daunting, don't despair - you are not alone in being concerned about data quality. A long line of industry experts, including Gartner, PricewaterhouseCoopers and The Data Warehousing Institute, has repeatedly identified a crisis in data quality management and a reluctance among senior decision-makers to do enough about it.
And at the end of the day, high-quality trademark data is truly fundamental to the success of trademark professionals and their organizations - especially at a time when the pressure to deliver trademark legal services more efficiently and at a lower cost is clear. If you’re a trademark manager, a high level of data quality really is in the best interest of your organization - and it needs to be a focus. By using some of the best-practices listed above, you can help guide your organization to a better place.