As someone passionate about the transformative power of technology, it’s fascinating to watch intelligent computing – in all its forms – bridging the gap between fantasy and reality. Organizations around the world are establishing where and how these advances can add value and bring them closer to their goals. The excitement is palpable.
However, we must not let this excitement blind us to the dangers, propelling us forward before we have taken the right preparatory steps or understood the challenges we will encounter along the way.
Preparing for a future powered by artificial intelligence (AI), a future where we can enjoy the clear benefits that technology brings while mitigating the risks, requires more than one article. This first article focuses on data as the “cornerstone” of AI-enabled initiatives.
Establish a data foundation
The move away from “software 1.0,” where applications were based on hard-coded rules, has begun, and the era of “software 2.0” is upon us. Software development, once reserved for human programmers, is now increasingly the product of carefully selected data ingested and analyzed by machine learning (ML) systems in a recurring cycle. In this new era, the role of humans in the development process is also changing: they are shifting from software programmers to “data producers” and “data curators” – responsible for ensuring data quality.
It would be a simple task were it not for the fact that in the digital age there has been an explosion of data – collected and stored everywhere – much of which is poorly governed, misunderstood and irrelevant. Data lakes arose at a time when organizations were preoccupied with “infrastructure-first transformation” initiatives. And, while it can be helpful to digitize business processes, offload siloed multi-generational IT, and drive cloud-first mandates, that will only get you so far on the transformation continuum.
Forward-thinking transformation leaders realized that there needed to be a greater emphasis on “data-centric value creation” and made it the overriding organizing principle in their organizations. “Data first” as the basis for technology and other critical investment decisions can:
Stimulate new operating models that help them differentiate and grow
Create “hyper personalized” digital moments and experiences that build loyalty
Improve foresight and expand predictive capabilities
These leaders are doing this not just to help them fully embrace the digital “now,” but to prepare for and capitalize on the AI-powered digital “next.”
Expose the blind spot
There is no doubt that the next wave of technology, driven by increased automation and computer intelligence, will rely on data more than any previous era. To take full advantage of these advances, data must be:
Well understood and well organized
Continually analyzed for relevance and cleaned
Ideally located where it can add the most value and be easily and economically accessible
Carefully selected to generate optimal business results
Tightly governed and regulated so that it is compliant and ethically sound
To neglect or downplay any of these considerations is to potentially build your AI future on pillars of sand.
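The criteria above can be made concrete as automated “data readiness” checks that run continuously, not just once. The sketch below is purely illustrative – the record fields, valid label set, and staleness threshold are assumptions for the example, not a prescribed schema:

```python
# Minimal sketch of automated data-readiness checks mirroring the criteria
# above: completeness, label validity, provenance, and relevance over time.
from datetime import date

# Toy dataset: each record is a labelled training example with provenance.
records = [
    {"value": 42.0, "label": "approve", "source": "crm", "updated": date(2024, 5, 1)},
    {"value": None, "label": "approve", "source": "crm", "updated": date(2024, 5, 2)},
    {"value": 17.5, "label": "revew",   "source": "",    "updated": date(2019, 1, 3)},
]

VALID_LABELS = {"approve", "decline"}  # governance-approved label set

def readiness_report(rows, stale_before=date(2023, 1, 1)):
    """Count records failing each basic quality check."""
    report = {"missing_value": 0, "bad_label": 0, "no_provenance": 0, "stale": 0}
    for r in rows:
        if r["value"] is None:
            report["missing_value"] += 1   # completeness
        if r["label"] not in VALID_LABELS:
            report["bad_label"] += 1       # mislabeling risk
        if not r["source"]:
            report["no_provenance"] += 1   # governance / lineage
        if r["updated"] < stale_before:
            report["stale"] += 1           # relevance over time
    return report

print(readiness_report(records))
# → {'missing_value': 1, 'bad_label': 1, 'no_provenance': 1, 'stale': 1}
```

In practice such checks would feed dashboards or block a training pipeline, but the principle is the same: the checks run against the data, continuously, rather than against the model.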
There is evidence to suggest that there is a blind spot when it comes to data in the context of AI. Many organizations focus too much on fine-tuning their computational models in their quest for “quick wins.” However, contrary to popular belief, AI success lies not in tweaking and recalibrating models, but in tweaking data, constantly.
Once built, computer models should remain relatively static. Most industry experts believe that the availability, quality, and understanding of data are the primary determinants of AI success. Without them, an organization’s AI exploits carry significant risk, especially due to the triple threat of data bias, mislabeling, and poor selection.
Despite warnings from leading thinkers such as Andrew Ng, much of the AI community remains unaware of the important data management capabilities, practices and, most importantly, tools that ensure the successful development and deployment of AI.
Rise to the challenge
Data-centric AI is evolving and should include relevant data management disciplines, techniques, and skills, such as data quality, data integration, and data governance, which are core capabilities for AI scaling. Moreover, data management activities do not stop once the AI model has been developed. To support this and enable malleability in the way data is managed, HPE has launched a new initiative called Dataspaces, a powerful cloud-agnostic digital services platform aimed at giving more control to data producers and curators as they build intelligent systems.
Addressing head-on the data gravity and compliance considerations that exist for critical datasets, Dataspaces provides data producers and consumers with frictionless access to the data they need, when they need it – supporting better integration, discovery and access, improved collaboration and better governance to boot.
This means organizations can finally take advantage of an ecosystem of AI-centric data management tools that combine both traditional and new capabilities to prepare the business for success in the age of intelligent decision-making. Novartis is a prime example.
Recommendations for Data and AI Leaders
In summary, to ensure AI programs are successful from the start, organizations should take the following data-related steps:
Formalize both “data-centric AI” and “AI-centric data” as part of the data management strategy with metadata and data structure as key foundational components.
Set policy safeguards that include mandatory minimums for “data adequacy” for AI to protect against bias, mislabeling, or irrelevance.
Define the appropriate formats, tools, and metrics for AI-centric data as early as possible, avoiding the need to reconcile multiple data approaches as AI evolves.
Seek the diversity of data, algorithms and people within the AI supply chain to ensure value is realized and ethical approaches are adopted.
Establish roles and responsibilities for managing data in support of AI, leveraging AI engineering and data management expertise (internal and external) and approaches to support continuous deployment and production uses of AI.
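To illustrate the second recommendation, a “data adequacy” policy can be expressed as an automated gate that blocks training when mandatory minimums are not met. This is a hypothetical sketch – the thresholds and metrics are illustrative assumptions, not HPE guidance:

```python
# Hypothetical policy gate: training proceeds only if the dataset meets
# mandatory minimums for completeness and class balance (a bias guard).
from collections import Counter

def passes_adequacy_gate(labels, completeness,
                         min_completeness=0.95, max_class_share=0.8):
    """Return True if mandatory data minimums are met; False blocks training."""
    if completeness < min_completeness:  # too many missing fields
        return False
    counts = Counter(labels)
    dominant_share = max(counts.values()) / len(labels)
    return dominant_share <= max_class_share  # guard against label skew

# A dataset that is 90% one class is blocked as a bias risk:
print(passes_adequacy_gate(["a"] * 9 + ["b"], completeness=0.99))      # → False
# A better-balanced dataset passes:
print(passes_adequacy_gate(["a"] * 6 + ["b"] * 4, completeness=0.99))  # → True
```

Real policies would cover more dimensions (provenance, consent, representativeness), but encoding even simple minimums as code makes the safeguard enforceable rather than aspirational.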
The next article will focus on how to increase the transparency and “explainability” of AI systems to effectively eliminate bias in data or computational models – thereby reducing the risk inherent in the process.
To learn more, visit HPE.
About Andrew P. Ayres MBA
Following a successful career at Gartner and Micro Focus, Andrew is now a senior specialist in HPE’s UK enterprise services practice, focusing on the financial services and insurance industry. As an expert in digital transformation, data-centric modernization, cloud computing and artificial intelligence, Andrew helps bring together the best of HPE’s capabilities to ensure customers are ready for the future and able to meet the ever-changing needs of their own customers.
Andrew holds an MBA from Manchester Business School and is currently a PhD candidate at Manchester Metropolitan University. His thesis focuses on how banks can govern against the risks posed by artificial intelligence in the context of their high-frequency trading operations.