My AI Journey - What's Changed Since the 1990s

If you follow our blogs or attend our webinars, you may have noticed that we've been doing a lot with artificial intelligence recently to drive value in the student onboarding and student compliance processes.  I have to admit that until recently, my experience with AI had been shaped by my data warehousing role at PeopleSoft in the 1990s.

Here is how things have changed since then.

AI in the 1990s?

Yes, you read that correctly.  In the mid-1990s I was the product owner of PeopleSoft's reporting and analysis tools.  My basket of products covered everything from the metadata that drove reporting to ad-hoc query tools, financial reporting tools, solutions for formatted reports and forms, business intelligence tools, and data warehousing.  My team was heavily involved in the creation of PeopleSoft's Enterprise Performance Management product suite at that time.

As it relates to these data warehousing / EPM products, we spent a lot of time focusing on both the visualization capabilities of the data (which did not have an AI component) and data mining processes (which is what we called AI back then).  The whole premise was that companies and institutions had a rich set of data they'd been capturing about their customers, vendors, employees, and students -- and that they could mine that data to effect change in the organization.

Common use cases we discussed were:

  • Market basket analysis
  • Customer segmentation
  • Sales incentive management
  • Supply chain optimization

All of these use cases focused on building models that were highly specialized and unique to the organization -- with the goal of unlocking value from these organizations' data.  The proprietary nature of these data sets brought with it the following attributes:

  1. The data was highly structured but often not clean
  2. Different systems could have different meanings for similar things (semantics)
  3. The data was often extremely sensitive

This had the following implications:

  1. A data quality / master data management project often had to be completed before any training could occur.
  2. The work generally needed to be done in-house because the model was trained on data specific to the institution -- and that data was sensitive.
  3. The work was very expensive and time-consuming -- especially since skills and processing resources were scarce back then and you couldn't easily achieve economies of scale across organizations.

Therefore, much of the focus in the 1990s was on using AI to gain insight into trends. We did have a vision for taking action on those insights (we called it "closing the loop").  However, there was limited ability to automate changes in response to those learnings.

Enter today's LLMs

Fast forward to 2025.  The most common usage of AI is the Large Language Model (LLM).  Whereas the data mining work in the 90s consisted primarily of bespoke efforts in which models were trained on internal, proprietary data sets, most of today's LLMs are packaged models already trained on publicly available data.  This is made possible by the following:

  • Growth of publicly available content on which to train (the internet)
  • Ability to process unstructured data in meaningful ways (which simplifies the adoption of that content in AI models)
  • Increases in networking and processing power which make it more economical to train a wider set of AI models
  • The investment made by many organizations to create and package AI models that solve more generalized problems
  • Better tools that make it easier to train new models when necessary

Because of all this, the way I think about and use AI from a product strategy perspective has dramatically changed. With the ubiquity of AI capabilities ready to use, there is less need to focus on where to get the data for training, how clean it is, and the mechanics of mining it to extract value.  Even for use cases where packaged models don't readily exist, the tools and methodologies to bridge the gap are available to simplify the process.

As such, here are some examples of use cases we're focused on:

  • Classifying documents provided by students (driver's license, passport, birth certificate, tax return)
  • Extracting data out of documents provided by students (name, address, birthdate, gender, SSN, income)
  • Scoring responses provided by a student

You may notice that each of these use cases focuses on using AI at a tactical level to eliminate manual steps and streamline existing processes.  Leveraging AI in this manner reduces effort for a larger set of stakeholders, improves the student's experience, and improves the quality of the processes.
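To make the document-classification use case concrete, here is a minimal sketch of the shape such a pipeline can take. The document types come from the list above; the classifier itself is a hypothetical keyword-matching stand-in (a real system would send the prompt built below to a packaged LLM, and the function names here are illustrative, not our actual product code):

```python
# Illustrative sketch of routing a student-supplied document to a type.
# A packaged LLM would normally do the classification; the keyword
# stand-in below just lets the example run without external services.

DOC_TYPES = ["driver's license", "passport", "birth certificate", "tax return"]

def build_prompt(text: str) -> str:
    """Build a zero-shot classification prompt an LLM could answer
    with exactly one of DOC_TYPES."""
    return (
        "Classify the following document as one of: "
        + ", ".join(DOC_TYPES)
        + ".\n\n"
        + text
    )

def classify_document(text: str) -> str:
    """Stand-in classifier: score each document type by how many of
    its words appear in the text, and return the best match."""
    lowered = text.lower()
    scores = {t: sum(word in lowered for word in t.split()) for t in DOC_TYPES}
    return max(scores, key=scores.get)

print(classify_document("United States Passport, surname DOE"))  # → passport
```

The same pattern extends naturally to the extraction use case: the prompt asks the model to return specific fields (name, address, birthdate) instead of a single label.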

Summary

Whereas my original AI focus was on solving highly specialized problems with an emphasis on better understanding business trends, we are now leveraging AI to improve business process efficiency.  This marks a shift from solving high-level strategic problems to streamlining the day-to-day operations of the business.

If I were to use an ocean liner as a metaphor, my early AI work was focused on helping the captain plan and navigate the voyage.  My current work helps the ship's crew get the vessel to its destination quickly, safely, and economically.

Schedule a Demo