In today's world, where the consumer is king, an excellent customer experience is imperative to the success of your business. To achieve it, your data cannot be fragmented, redundant, obsolete, or inaccessible. Most organizations are currently dealing with more information than they can handle, and that is expensive: storing, protecting, and securing information all consume costly resources. It's only when you understand what data you have, why you have it, and why you need it that your data can be leveraged as an asset.
Data Privacy Day takes place annually on January 28th in recognition of the January 28, 1981 signing of Convention 108, the first legally binding international treaty concerning privacy and data protection. This day, led officially by The National Cyber Security Alliance (NCSA), is an international effort to “create awareness about the importance of respecting privacy, safeguarding data, and enabling trust”.
The venerable template allows structured form data to be extracted accurately. In the document capture industry, the template, in which you specify the location of each data element, is a tried-and-true strategy for structured forms. If the form is standardized, telling the software precisely where to look for each piece of data will almost always outperform alternatives such as rules-based approaches built on keywords or patterns. Even with unstructured documents such as invoices, we find that many organizations have opted for a template approach after finding that more flexible, rules-based approaches fall short. The trade-off is a tremendous amount of upfront effort and ongoing maintenance.
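To make the idea concrete, here is a minimal sketch of template-based extraction, assuming OCR output as word positions on the page. All names, field zones, and coordinates below are illustrative, not a real product's API:

```python
# A "template" maps each field to the fixed region of the page where it
# appears on a standardized form. Coordinates are hypothetical pixel ranges.
INVOICE_TEMPLATE = {
    "invoice_number": {"x": (400, 600), "y": (40, 70)},
    "total": {"x": (450, 600), "y": (700, 730)},
}

def extract_fields(words, template):
    """Collect OCR words whose coordinates fall inside each field's zone."""
    results = {}
    for field, zone in template.items():
        (x0, x1), (y0, y1) = zone["x"], zone["y"]
        hits = [w["text"] for w in words
                if x0 <= w["x"] <= x1 and y0 <= w["y"] <= y1]
        results[field] = " ".join(hits)
    return results

# Example word-level OCR output from a hypothetical scanned invoice
words = [
    {"text": "INV-1042", "x": 500, "y": 55},
    {"text": "$1,250.00", "x": 520, "y": 710},
    {"text": "ACME", "x": 50, "y": 30},  # falls outside every zone, ignored
]

print(extract_fields(words, INVOICE_TEMPLATE))
# → {'invoice_number': 'INV-1042', 'total': '$1,250.00'}
```

The upfront cost described above lives in that template dictionary: every form layout needs its own set of zones, and any change to the form means re-measuring and re-testing them.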
"Having your cake and eating it, too" is a proverb almost 500 years old; it means you cannot have two incompatible things at the same time. Many situations present two mutually exclusive options, and document capture is one of them. Document capture software is designed to automate document-oriented tasks such as sorting documents or extracting key data. To achieve that automation, you must spend time configuring the software to identify documents and to locate and extract data reliably enough that your staff need only verify a small percentage of it.
In 1989, at the age of 20, I took my first decision sciences course and started coding in SAS. I greatly enjoyed unearthing discoveries buried within mounds of data, and even small datasets held many discoveries back then. At the root of every model I've built, even the simplest, was a solid understanding of statistical theory and the rigor it demands. When computing simple statistics or developing descriptive models, I thought through the math behind the model and how it would shape the model's formation, application, and interpretation. That was about 30 years ago, and I'm still applying techniques and building models to solve problems empirically, answer questions, and overcome challenges, reducing error or otherwise improving a situation. Over the past three decades, I've noticed trends and shifts, an evolution of sorts, in the foundational underpinnings of how models are developed and applied in this interesting profession. I've come to the following conclusions about the evolution of the data science function over the past few decades:
“Digital Transformation is a game-changer,” and “leaders embrace digital transformation”—but is it really a game-changer, and have we actually embraced it?