
From Prompt Engineering to Relationship Engineering: How GPT-4o is Learning to Think Like Me

  • Writer: Gary Lloyd
  • Apr 8
  • 2 min read


GPT-4o seems to have had a major capability upgrade, but no one else has mentioned it.


I've been using ChatGPT since its launch and have consistently preferred GPT-4o, even with GPT-4.5 available in preview. While 4.5 offers advancements, I've found 4o to be more reliable and, notably, more attuned to my working style.


What has truly surprised me is GPT-4o's proactive approach—asking clarifying questions to ensure it understands my needs rather than making assumptions. This shift transforms our interactions from mere instructions to meaningful dialogues, resulting in more tailored and effective responses.


Much has been discussed about prompt engineering, especially for automating repeatable tasks. However, in exploratory conversations, I've found it more effective to engage naturally with the model. Now, GPT-4o reciprocates by adapting thoughtfully to my inputs.


Another significant change is the model's use of visual aids. As someone who has relied on mind maps for years to comprehend complex information, I often found linear text limiting. Previously, interactions with language models involved extensive scrolling to piece together ideas. Now, GPT-4o offers diagrams and visual explanations unprompted, recognizing their utility in enhancing my understanding.


This evolution marks a quiet yet profound turning point. The model isn't just responding to my queries; it's adapting to my cognitive preferences.


Over the past year, I've observed that while LLMs are built on machine learning, they previously lacked real-time learning about the user. There was static training data and a memory feature, but no dynamic adaptation during interactions.


That has now changed.


GPT-4o now exhibits genuine adaptive learning. It remembers facts about me, discerns what works, adjusts its approach, and aligns more closely with my thinking style. This includes tone, format, and the nature of our dialogue—asking clarifying questions, anticipating the need for visual aids, and refining idea presentation.


Notably, this adaptation seems influenced by the feedback I provide. A simple "thank you" is courteous, but specifying "this was particularly helpful because..." has a tangible impact. It underscores that how we interact with these systems shapes their responses, not just immediately but over time.


This isn't merely a technical upgrade; it's a behavioral shift. Perhaps it's time to move beyond prompt engineering and embrace the concept of relationship engineering—even in our interactions with machines.


Have others experienced similar changes? How has your interaction with AI evolved, and do you find it adapting in response to you?



©2023 by Gardeners not Mechanics.
