Who is pacing this race?

Image Credit: Wanda Tuerlinckx, Humanoids

Since the launch of OpenAI and Copilot, we have been talking about these tools as “assistants,” even though they are machines, not so much tools. Language takes time to settle, as it often does when we go through major paradigm shifts.

This “machine-coworker” is starting to take on very entry-level tasks. It augments entry-level knowledge-worker jobs such as call center representative with chatbots and interactive voice response (IVR), the very early stages of customer service. It also shows up in basic recommendation engines for content.

The idea of a more freeform machine-coworker that we can ask questions in natural language and that can anticipate our needs is just getting started. However, it requires a higher level of digital fluency for us to really find a groove with these machine-coworkers and not be overtaken by them. As I came up in my career, we were taught to “automate ourselves out of a job” so we could move on to the next thing. Our skills, our desire for change, our interest in the business, and the work itself dictated the pace. Today, the pace of automation may not be dictated by us. There is a real risk of jobs going away before we have thought through the next set of problems to solve. If we cannot see around corners, we have not prepared for our own transition.

Add to this that there are still a lot of misunderstandings about how data works. As a function, IT is still a black box to many business functions, and people outside of data teams have little insight into the data supply chain or the true cost of data as it applies to answering their business questions.

Many misconceptions exist about how machines can or cannot learn and what that means. Understanding how machines and machine-coworkers learn, and framing it in more human terms, makes it more accessible for people. We must do some overall digital fluency work to prepare people for that, but we must also ask whether we are out over our skis.

Who is setting the pace for change? Is it the economy’s voracious appetite for profit from business models dependent on hype cycles? Or is it us, truly going after the problems that genuinely need to be solved: education, recidivism, workforce retraining, healthcare, climate change, refugee management, immigration, homelessness, and the litany of world problems making their way to our front doors and kitchen tables?

The most important thing to remember is that no matter how much augmentation these machines enable and how much progress they bring, there will continue to be limits. Humans have a remarkable capacity to deal with and adapt to change; there will not be an ‘end of human work.’

Common Misconceptions About How Machines Learn

MACHINES CAN LEARN:

  • PATTERN RECOGNITION: Machines can excel in recognizing patterns and trends within vast datasets, allowing them to make predictions or classifications based on learned patterns. With its ability to automatically learn patterns, make predictions, and uncover hidden insights, machine learning has opened up new avenues for scientific inquiry. (Mind The Graph)

  • SUPERVISED LEARNING: Machines are capable of supervised learning, where they can be trained on labeled data to perform specific tasks with accuracy (see the short sketch after this list). (MIT)

  • ADAPTATION TO NEW DATA: Machines can adapt to new information and updates by adjusting their algorithms, enabling continuous learning and improvement. Contrary to traditional approaches, where models are trained on a static dataset, deployed, and periodically re-trained, continuous learning models iteratively update their parameters to reflect new distributions in the data. (DataCamp)

  • AUTOMATION OF REPETITIVE TASKS: Machines can learn to automate repetitive tasks through algorithms, reducing the need for explicit programming and improving efficiency. “So far, most implementations of AI have resulted in some form of augmentation, not automation. Managers' surveys suggest that relatively few have automation-based job loss as the goal of their AI initiatives. So while I am sure there will be some marginal job loss, AI will free up workers to be more creative and do more unstructured work” (Thomas H. Davenport). (Pew Research)
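
The first two capabilities above lend themselves to a small, concrete example. The sketch below is my own illustration, not something from this post or its cited sources: it trains a classifier on labeled examples (supervised learning) and then updates the same model incrementally as a new batch arrives (adaptation to new data). It assumes scikit-learn is installed; the synthetic dataset and variable names are invented for the example.

```python
# A minimal sketch, assuming scikit-learn is available. The data is synthetic
# and every name here is illustrative, not taken from the post or its sources.
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

# Labeled data: each row of X is an example, each entry of y is its label.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_new, y_train, y_new = train_test_split(X, y, test_size=0.2, random_state=0)

# Supervised learning: fit the model to the labeled training examples.
model = SGDClassifier(random_state=0)
model.fit(X_train, y_train)
print("accuracy before update:", model.score(X_new, y_new))

# Adaptation to new data: update the same model on a newly arrived batch
# instead of retraining from scratch on a static dataset. Scoring on that
# same batch afterward just shows the model changed; it is not a proper evaluation.
model.partial_fit(X_new, y_new)
print("accuracy after update:", model.score(X_new, y_new))
```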

MACHINES STRUGGLE TO LEARN:

  • CREATIVITY: Machines often struggle with tasks requiring creativity, as they cannot generate innovative ideas or solutions beyond their programmed algorithms. So far, generative AI seems to work best with human partners, and perhaps then, the synthetic creativity of AI is a catalyst to push our human creativity, augmenting human creativity rather than producing it. (World Economic Forum)

  • UNDERSTANDING CONTEXT: Machines may have difficulty understanding context, making it challenging to interpret information accurately in situations that involve nuanced or complex contextual understanding. There is still much work to do before AI truly understands language. Consider the question, “Who are my customers?” It presents a simple enough task: create a list of customers. But what about the question, “Who are my best customers in the Pacific Northwest for a particular product?” Now we’ve added a layer of complexity that requires several integrated tasks to answer qualifying questions, such as: How is “best” defined? (A short sketch after this list makes this concrete.) (Harvard Business Review)

  • EMOTIONAL INTELLIGENCE: Machines lack emotional intelligence and the ability to understand or respond to human emotions, limiting their effectiveness in tasks that require emotional understanding or empathy. Within the field of AI, there has been growing interest in developing ‘empathic AI systems’. However, by and large, empathy is poorly formulated using proxy data such as facial expressions, voice signals, and gestures, without considering the multiple facets and subjective notions that empathy entails. (ScienceDirect)

  • COMMON SENSE REASONING: Machines often lack common sense reasoning, finding it challenging to make intuitive decisions or apply practical knowledge in situations where explicit data is unavailable or insufficient. Commonsense intelligence is a long-standing challenge in AI. Despite considerable advances in deep learning, AI systems remain narrow and brittle. (Dædalus)
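
To make the point about context concrete, here is a small sketch of my own; the data, the column names, and the choice of “highest revenue” as the meaning of “best” are all assumptions made for illustration, not anything from the cited article. It assumes pandas is available. The machine can filter and sort, but a person still has to decide what “best” means before the question is answerable.

```python
# A hypothetical illustration: the machine handles the mechanical tasks, but a
# human must make "best" explicit first.
import pandas as pd

# Invented customer data; the column names are assumptions for this sketch.
customers = pd.DataFrame({
    "name":    ["Acme", "Birch", "Cedar", "Dune"],
    "region":  ["Pacific Northwest", "Southwest", "Pacific Northwest", "Pacific Northwest"],
    "product": ["widgets", "widgets", "gears", "widgets"],
    "revenue": [120_000, 90_000, 45_000, 200_000],
})

# Task 1: narrow to the region and product named in the question.
subset = customers[(customers["region"] == "Pacific Northwest")
                   & (customers["product"] == "widgets")]

# Task 2: a human defines "best" -- here, highest revenue, but it could just as
# easily mean retention, margin, or growth, and each choice changes the answer.
best = subset.sort_values("revenue", ascending=False).head(3)
print(best[["name", "revenue"]])
```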
