
AI / ML in enterprises: Technology Platform

As an organization embarks on leveraging AI / ML at enterprise scale, it is important to establish a flexible technology platform that caters to the different needs of data scientists and the engineering teams supporting them. Technology platform here means the hardware architecture and software frameworks that allow ML algorithms to run at scale.

Before getting into the software stack used directly by data scientists, let's understand the hardware and software components required to enable machine learning.

  • Hardware layer: x86-based servers (typically Intel) with acceleration using GPUs (typically NVIDIA)
  • Operating systems: Linux (typically Red Hat)
  • Enterprise Data Lake (EDL): a Hadoop-based repository like Cloudera or MapR, along with supporting stacks for data processing (a minimal ingestion sketch follows this list):
    • Batch ingestion & processing: example – Apache Spark
    • Stream ingestion & processing: example – Apache Spark Streaming
    • Serving: example – Apache Drill
    • Search & browsing: example – Splunk
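
To make the batch and stream ingestion components concrete, here is a minimal PySpark sketch under illustrative assumptions: the data lake paths, the Kafka broker and the topic name are hypothetical, and a real deployment would use the connectors bundled with the chosen distribution.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("edl-ingestion").getOrCreate()

# Batch ingestion: read raw CSV files landed in the data lake (paths are illustrative)
batch_df = spark.read.option("header", True).csv("/datalake/raw/transactions/")
daily_totals = (batch_df
                .groupBy("account_id")
                .agg(F.sum("amount").alias("total_amount")))
daily_totals.write.mode("overwrite").parquet("/datalake/curated/daily_totals/")

# Stream ingestion: consume events from a Kafka topic
# (broker and topic are illustrative; the Spark-Kafka connector must be on the classpath)
stream_df = (spark.readStream
             .format("kafka")
             .option("kafka.bootstrap.servers", "broker:9092")
             .option("subscribe", "transactions")
             .load())
query = (stream_df.selectExpr("CAST(value AS STRING) AS event")
         .writeStream
         .format("parquet")
         .option("path", "/datalake/raw/stream/transactions/")
         .option("checkpointLocation", "/datalake/checkpoints/transactions/")
         .start())
```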

Once the necessary hardware and data platforms are set up, the focus shifts to providing an effective end-user computing experience to data scientists:

  • Notebook frameworks for data manipulation and visualization: Jupyter Notebooks or Apache Zeppelin, which support the programming languages most commonly used for ML, such as Python and R (a minimal sketch follows this list).
  • Data collection & visualization: Elastic Stack and Tableau.
  • Integrated, data-optimized platforms like IBM Spectrum simplify things for enterprises by addressing all the needs listed above (components include an enterprise grid orchestrator along with a notebook framework and the Elastic Stack).
  • Machine learning platforms: specialized platforms like DataRobot, H2O, etc. simplify the ML development lifecycle and let data scientists and engineering teams focus on creating business value.
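
As a small illustration of the notebook experience, here is a sketch of Python-based data manipulation and visualization with pandas and Matplotlib; the file name and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Load a curated feature set exported from the data lake (file name is illustrative)
df = pd.read_parquet("daily_totals.parquet")

# Quick exploration: summary statistics and a log-scaled derived column
print(df.describe())
df["log_total"] = np.log1p(df["total_amount"])

# Simple visualization rendered inline in the notebook
df["log_total"].hist(bins=50)
plt.xlabel("log(1 + total_amount)")
plt.ylabel("count")
plt.title("Distribution of daily totals")
plt.show()
```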

There are numerous other popular platforms like TensorFlow, Anaconda and RStudio, and evergreen ones like IBM SPSS and MATLAB. Given the number of options available, particularly open-source ones, attempting a comprehensive list would be difficult. My objective is to capture the high-level components an enterprise needs in its technology platform to get started with AI / ML development.

AI / ML in enterprises: Lifecycle & Departments

Many start-ups are built on AI / ML competence and require this expertise across the organization. In established enterprises, AI / ML is fast becoming pervasive given the disruption from start-ups and rising customer expectations. Depending on the size of the organization and the level of regulation in its industry, machine learning activities might be embedded within existing technology teams, or dedicated "horizontal" teams might be responsible for them.

The ML activities that people readily recognize are the ones performed by data scientists, data engineers and the like. However, other business and technology teams are also essential to enable ML development. Given the potential bias and ethics implications of business decisions made by AI / ML, governance to ensure risk and regulatory compliance will be required too. In this blog, I will cover the AI / ML lifecycle along with the functions and departments in an enterprise that are critical for successful ML adoption.

AI model inventory: There is an increasing regulatory expectation that organizations be aware of all AI / ML models used across the enterprise so they can manage risks effectively. This McKinsey article provides an overview of the risk management expected in the banking industry. As an organization establishes its AI / ML development process, a good starting point is to define what constitutes an AI model, ensure a common understanding across the organization and create a comprehensive inventory.

Intake and prioritization: To avoid indiscriminate or inappropriate AI / ML development and use, it is important that any such development goes through an intake process that evaluates risk, regulatory considerations and return on investment. It is good practice to define certain organization-wide expectations, and preferable to federate the responsibility for agility.

Data management: Once an AI model is approved for development, business and technology teams work together to identify the required data, secure it from different data sources across the organization and convert it into a feature set for model development.

  • Data administrators manage the various data sources, which are typically data lakes (Apache Hadoop implementations), warehouses (like Teradata) or RDBMSs (like SQL Server / Oracle).
  • Data engineers help with data preparation, wrangling, munging and feature engineering using a variety of tools (like Talend) and make the feature set available for model development (a minimal sketch follows this list).
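
For illustration, here is a minimal sketch of the preparation and feature engineering step using pandas; the source tables, column names and the published feature set file are all hypothetical.

```python
import pandas as pd

# Raw extracts pulled from source systems (table and column names are illustrative)
accounts = pd.read_parquet("accounts.parquet")
transactions = pd.read_parquet("transactions.parquet")

# Wrangling: join sources and handle missing values
data = transactions.merge(accounts, on="account_id", how="left")
data["segment"] = data["segment"].fillna("unknown")

# Feature engineering: aggregate behavior per account
features = (data.groupby("account_id")
            .agg(txn_count=("amount", "size"),
                 avg_amount=("amount", "mean"),
                 max_amount=("amount", "max"))
            .reset_index())

# One-hot encode the categorical segment and publish the feature set
segments = data.groupby("account_id")["segment"].first().reset_index()
features = features.merge(segments, on="account_id")
features = pd.get_dummies(features, columns=["segment"])
features.to_parquet("feature_set.parquet", index=False)
```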

Model development: Data scientists use AI / ML platforms (like Anaconda, H2O or Jupyter) to develop AI models. While model development is federated in most enterprises, AI / ML governance requires teams to adhere to defined risk and regulatory guidelines.
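
A minimal model-development sketch using scikit-learn against the hypothetical feature set from the previous example; the label column "churned" and the chosen algorithm are assumptions, and real development involves far more iteration on features, algorithms and hyperparameters.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Load the engineered feature set and a label column (names are illustrative)
features = pd.read_parquet("feature_set.parquet")
X = features.drop(columns=["account_id", "churned"])
y = features["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

# Evaluate on the held-out set before submitting the model for validation
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out AUC: {auc:.3f}")
```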

Model validation: An enterprise risk team usually validates models before production use, particularly those that are external facing or deemed high risk.

Deployment & monitoring: The technology team packages approved models with the necessary controls, integrates them into appropriate business systems and monitors them for stability and resilience.

Enterprises strive to automate the entire lifecycle so that the focus can be on adding business value effectively and efficiently. Open-source platforms like Airflow, MLflow and Kubeflow help automate orchestration and provide seamless end-to-end integration for all teams across the AI / ML lifecycle.
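
As one example of lifecycle automation, here is a minimal MLflow tracking sketch that logs the parameters, metric and model artifact of a training run so that validation and deployment teams can trace it; it reuses the hypothetical feature set and label from the model-development sketch above.

```python
import mlflow
import mlflow.sklearn
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Reload the hypothetical feature set used in the model-development sketch
features = pd.read_parquet("feature_set.parquet")
X = features.drop(columns=["account_id", "churned"])
y = features["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

# Record each training run so the lifecycle stays auditable end to end
with mlflow.start_run(run_name="churn-gbm"):
    params = {"n_estimators": 100, "learning_rate": 0.1, "random_state": 42}
    model = GradientBoostingClassifier(**params)
    model.fit(X_train, y_train)

    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    # Parameters, metrics and the model artifact are logged for downstream teams
    mlflow.log_params(params)
    mlflow.log_metric("holdout_auc", auc)
    mlflow.sklearn.log_model(model, "model")
```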

AI / ML in enterprises: Challenges

Organizations need to keep up with the times for long-term sustainability, and with AI / ML becoming pervasive across business domains, every firm nowadays has teams trying to leverage machine learning algorithms to stay competitive.

In this blog, I will cover the top five challenges they encounter after the initial euphoria of proofs of concept (POCs) and pilots.

  1. Lack of understanding: AI / ML has the potential to transform technology and business processes across the organization and to create new revenue streams, mitigate risks or save costs. However, AI / ML is not a substitute for subject matter expertise. A discussion among novices will throw up a million possibilities, and ML can appear to be an appropriate solution to all of the world's problems. While machine learning is based on the ability of machines to learn by themselves, training the algorithms with appropriate data is an important aspect that can be done only by experts. To generate meaningful results, data scientists need to work in unison with business and technology professionals: data scientists bring a deep understanding of ML algorithms, business professionals identify meaningful features, and data engineers help secure data from different sources that will eventually become the feature set. A good understanding of ML across the organization is therefore important to identify the right problems to solve, and the lack of it is the most important challenge preventing enterprises from deriving benefit despite investment.
  2. Lack of IT infrastructure: As I mentioned in my original ML post, machine learning came to prominence due to significant advances in processing power and data storage. Enterprises can acquire the required compute power through cloud providers, and many organizations also choose to build their own parallel processing infrastructure. The decision to leverage the cloud vs. building internal infrastructure is based on a number of factors like regulations, scale and, most importantly, cost. Either way, without this investment, ML programs will not go far. Some organizations invest in the requisite hardware but fail to provide the software and database platforms data scientists and technologists need to leverage that infrastructure. Most machine and deep learning platforms and tools used for development are open source. However, this cost advantage is offset by the sheer number of options available for ML development; there is no one-size-fits-all solution. To summarize, the second challenge is to create the powerful IT infrastructure required for ML development and deployment.
  3. Lack of data: With good understanding and infrastructure, this challenge should be addressed, but data is foundational for ML and I have listed it separately to call out the nuances. Data should be available in sufficient quantity and of good quality for meaningful results. Data preparation is an important step – wrangling, munging, feature scaling, mean normalization, labeling and creating an appropriate feature set are essential disciplines (a minimal scaling sketch follows this list). It is a challenge to identify problems that have the requisite data at scale and to prepare that data for machine learning algorithms to work on.
  4. Lack of talent: Going by the number of machine learning projects that fail to meet their purpose, the ability of existing teams across enterprises is questionable. Any technology is only as good as the people working on it. A few technologies have managed to simplify the work expected from programmers (just drag-and-drop or configuration driven). However, machine learning still requires deep math skills and a thorough understanding of algorithms, so finding suitable talent is particularly difficult.
  5. Regulations & policies: In a diverse world with myriad regional nuances, decisions made by machines tend to undergo far more scrutiny than ones made by humans. Our societies remain wary of machines taking over from humans, and governments all over the world have regulations that require decisions made by machines to be demonstrably fair and free of bias. This challenge is made more complex by interpreters of regulations inside an enterprise who put in place unnecessary controls that might not address the regulation but do impede ML development. So, it is important for policies to address regulatory concerns without derailing ML development.
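
To illustrate the data-preparation disciplines called out in challenge 3, here is a minimal feature scaling and mean normalization sketch with pandas and scikit-learn; the features and their values are made up.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical raw features on very different scales
df = pd.DataFrame({
    "income": [32000, 85000, 120000, 47000],
    "age": [23, 45, 52, 31],
})

# Standardization: (x - mean) / std, giving each feature zero mean and unit variance
standardized = pd.DataFrame(
    StandardScaler().fit_transform(df), columns=df.columns)

# Mean normalization: (x - mean) / (max - min), scaling features to roughly [-1, 1]
mean_normalized = (df - df.mean()) / (df.max() - df.min())

print(standardized.round(2))
print(mean_normalized.round(2))
```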

Finally, enterprises are riddled with politics, and all of the above challenges can be addressed only when business, technology and other supporting functions work together seamlessly. Start-ups and relatively new technology organizations keep things simple and are more adept at solving these challenges. Large enterprises that have added layers of internal complexity over the years naturally find it more difficult to overcome differences and solve the same challenges.

AI / ML in enterprises: Relevance

Let’s explore two aspects that provide insight into AI / ML relevance within technology and across enterprises: first, the roles in technology that need to work on machine learning algorithms, and second, the areas within an enterprise that will benefit most from AI / ML.

It is an incorrect assumption that all software engineers will work only on ML algorithms in the future and that demand for other skills will plummet. In fact, the majority of current software engineering roles that do not require machine learning expertise will continue to exist.

Software engineering functions that DO NOT require machine learning expertise: UI / UX development, interface / API development, rule-based programming and several other client- and server-side components that require structured or object-oriented programming. In addition, there are others, like database development and SDLC functions, that are required for the AI / ML technology lifecycle but don't require deep machine learning knowledge. That leaves only the data / feature engineering, data science and model deployment teams that absolutely require machine learning expertise. However, these are rapidly growing areas, and demand for experts will continue to outpace many other areas.

Where can we leverage ML? Any use case where historical data can inform decisions, but where the data is so extensive that it is practically impossible for a human to analyze it comprehensively and generate holistic insights, is a candidate for ML. The approach is to leverage human subject matter expertise to source relevant data, determine the right data elements (features), select an appropriate ML model and train it to make predictions and propose decisions. A few examples:

  • Sales & Marketing: Use data about customer behavior to make recommendations. We see this all the time from Amazon, YouTube, Netflix and other technology platforms.
  • IT Operations: Use a variety of features to predict potential failures or outages and alert users / ops.
  • Customer Service: Chatbots that use natural language processing to answer user queries.
  • Intelligent Process Automation: Eliminate manual operations, thereby optimizing labor costs and reducing operational risk.
  • Cyber Security: Detect malicious activity and stop attacks.
  • Anomaly detection: Every business domain needs to watch for anomalies, and detecting them reduces losses or accidents. It could be detecting defaults, money laundering or fraud for banks, detecting a leak in a chemical plant, spotting a traffic violator, etc. (a minimal sketch follows this list).
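
As a small illustration of the anomaly detection use case, here is a sketch using scikit-learn's IsolationForest on synthetic data; the features and the contamination rate are assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical transaction features: mostly normal records plus a few extreme outliers
normal = rng.normal(loc=100, scale=20, size=(990, 2))
outliers = rng.normal(loc=400, scale=50, size=(10, 2))
X = np.vstack([normal, outliers])

# Fit an unsupervised anomaly detector; contamination is the expected share of outliers
detector = IsolationForest(contamination=0.01, random_state=42)
labels = detector.fit_predict(X)  # -1 marks anomalies, 1 marks normal points

print(f"Flagged {np.sum(labels == -1)} suspicious records out of {len(X)}")
```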

Every enterprise, large or small, is likely to have AI / ML opportunities that will deliver bottom-line benefits. In the next part, I will cover the typical challenges an enterprise faces during adoption.

AI / ML in enterprises: Hype vs. Reality

Having completed my machine learning certification in August 2019, I was fortunate to get an opportunity soon after to build and lead a technology team that worked on AI / ML problems across the enterprise.

During the team build-out phase, I realized that many software engineers have completed a formal certification in machine learning to qualify themselves for a role in this emerging technology area where demand is expected to increase. There is also an unfounded assumption that all software engineers will work only on ML algorithms in the future and that demand for other skills will plummet. The reality is that not all software applications will be suitable machine learning candidates. Moreover, developing machine learning algorithms is only part of the AI / ML technology lifecycle. There will be massive software engineering needs outside of machine learning, particularly around data and SDLC automation, to enable AI / ML technology. Having said that, familiarity with machine learning concepts will increase the effectiveness of software engineers, as applications in the near future will increasingly interface with ML modules for certain functions.

Now, let’s address another question – is AI / ML just hype? To answer it, let's look through the lens of the Gartner Hype Cycle. Since the mid-1990s, a number of technologies have fallen by the wayside after inflated initial expectations. However, a few, like cloud computing, APIs / web services and social software, went through the hype cycle, and the reality after mainstream adoption was quite close to the initial expectations. Looking at the hype cycles since 2013, several technologies related to AI / ML have been at the top every year. Starting with big data and content analytics, we have seen natural language processing, autonomous vehicles, virtual assistants, deep learning and deep neural networks emerge at the top over the last seven years. And results from machine learning algorithms have already become part of our day-to-day lives – like the recommendations made by Amazon, YouTube or Netflix, and the chatbots available through a number of channels.

So, I believe AI / ML is real and will continue to disrupt mainstream industries. However, it will differ from other familiar technology disruptions in many ways:

  • AI / ML technology will continue to evolve rapidly, driven by Silicon Valley innovation.
  • New specialized areas of expertise will emerge every year, each requiring deep math understanding.
  • The technology workforce will be under pressure, as past work experience will be of limited value given this fast evolution.
  • Traditional enterprises will struggle to keep pace.
  • The possibility of learning directly from data will undermine established business theories.

Finally, the overwhelmingly open-source nature of this domain lowers the entry barrier and encourages start-ups to challenge established players. It also gives established organizations an opportunity to adopt and manage this disruption. The choices they make will determine whether an organization disappears like BlackBerry, comes back with a bang like Microsoft or continues to hang on like IBM. While this is primarily about embracing a relatively new technology domain, an appropriate strategy around people and process will also be required to succeed. To summarize, organizations will have to create the right ecosystem and provide clarity on an approach that encourages people to innovate.

In this blog series, I will articulate my thoughts around people, process and technology considerations while adopting AI / ML in a large enterprise:

  • Technology functions that will require machine learning expertise.
  • Business domains that will benefit from AI / ML.
  • Challenges that enterprises should be prepared to encounter.
  • Structure and governance to scale up adoption.
  • ML technology platform.