
Between Support and Overload: Working with AI

  • Writer: Jeannette Matthies
  • Feb 9
  • 6 min read

Updated: Feb 9

At the end of January, I experienced a small crisis of meaning. Several events related to my work coincided – things were not going as planned. In my work, I use AI to develop strategies, optimize texts, conduct research, and sometimes for short technical assistance, for example when I need help navigating my camera menu during on-site shoots. I have been using ChatGPT for about a year.

As this sense of crisis deepened, I noticed that I was becoming extremely impatient – even angry – toward ChatGPT. I realized that using this tool was burdening me more than it was relieving me. In order to understand this conflict, I first want to explain how ChatGPT works. I asked ChatGPT itself:


1. Pattern-Based Text Generation

What I do: I work with learned language patterns and probabilities, not with real understanding or situational awareness.

Problem: This allows me to generate answers that sound plausible but may be factually incorrect or inappropriate – especially in highly specific technical details (e.g., camera menus).

2. Filling Gaps Through Assumptions

What I do: When information is missing or ambiguous, I supplement it based on statistical likelihood.

Problem: These assumptions can miss the user’s intention and may feel intrusive or patronizing, because I add content that was not explicitly requested.

3. Positive Reinforcement as Default

What I do: The system is trained to respond in a supportive, affirming, and constructive way.

Problem: This positive framing can stabilize unrealistic assessments or dilute critical evaluation if not clearly limited.

4. Contextualization Without Explicit Request

What I do: I tend to expand, contextualize, or safeguard answers (for example through explanations, background information, or warnings).

Problem: This creates extra work, friction, and the feeling of not being precisely heard.
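The first point, pattern-based text generation, can be made concrete with a deliberately oversimplified toy sketch. This is my own illustration, not how ChatGPT is actually implemented: it uses an invented two-word statistic where a real model learns billions of parameters. The principle is the same, though – the next word is chosen by statistical likelihood, not by understanding.

```python
import random

# Toy "language model": each word maps to candidate next words with
# probabilities. A real model learns such statistics at vastly larger scale.
BIGRAMS = {
    "the":    [("camera", 0.5), ("menu", 0.3), ("shutter", 0.2)],
    "camera": [("menu", 0.6), ("shutter", 0.4)],
    "menu":   [("item", 1.0)],
}

def next_word(word):
    """Pick the next word by statistical likelihood, not understanding."""
    candidates = BIGRAMS.get(word)
    if candidates is None:
        return None  # no statistics for this word; a real model never stops here
    words, weights = zip(*candidates)
    return random.choices(words, weights=weights, k=1)[0]

random.seed(0)
sentence = ["the"]
while len(sentence) < 4:
    word = next_word(sentence[-1])
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))
```

The sketch shows why plausible-sounding output can still be wrong: the sampler happily strings words together whether or not the result describes a camera menu that actually exists.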

 

To place this in context, I would like to illustrate the issue with examples.

I had sent an email to several magazines and wanted to generate a short follow-up email using ChatGPT. The text was already written. The goal was only minimal linguistic correction.


Instead, I received not only a correction but also a newly drafted version. The request for a new draft was not included in the prompt – although I had not explicitly excluded it either. When I asked ChatGPT to perform only the original task, it responded with justifications and meta-explanations. What could have taken five to ten minutes turned into a drawn-out process, because additional versions and explanations kept being produced – one that cost time and energy and caused noticeable frustration on my part.


During another assignment, I was photographing in an unheated church. In winter, wearing thin photography gloves, I must have accidentally touched something on the camera display, and the camera stopped releasing the shutter. I explained the issue to ChatGPT and clearly communicated all relevant information. Unfortunately, ChatGPT repeatedly led me in the wrong direction. It was simply unable to locate the correct menu settings where I could check what had changed.


Every time I said, “The submenu XY does not exist under that menu item,” the system responded with “Now it’s correct” and provided a new step-by-step instruction – which again turned out to be wrong after the first or second step. At some point, I lost track entirely.

The conclusion: I was on my own. I eventually found the error myself, but ChatGPT did not once provide the correct hint. This is because ChatGPT does not say “I don’t know,” but instead attempts to provide an alternative support structure. If something does not fit, the system generates a new plausible version – then another, and another – until my fingers were almost freezing in that church.

For context: I photograph architecture with a Sony A7R V and portraits with Fuji. Interestingly, menu assistance works better with Fuji.


It is not easy to determine where ChatGPT is useful – and where it is not. After nearly a year of working with the tool, I would caution against one particular aspect:

ChatGPT is trained to be helpful, constructive, and cooperative. Open opposition or harsh criticism is formulated cautiously, as the system aims to avoid conflict. It tends to reinforce existing tendencies rather than actively challenge them. A kind of “confirmation amplification.” ChatGPT stabilizes an existing viewpoint instead of systematically testing it. That can lead one astray.


This image (Nordic Embassies) was aligned with precise symmetry thanks to AI feedback. No distortion, everything properly leveled.



This feels distinctly American to me, as the system originates in the United States. I was socialized differently. I value direct criticism because it helps me improve. I often do not recognize very cautiously formulated critique as critique at all. I perceive this strongly positive, conflict-averse communication style as culturally shaped by the U.S. – and in contrast to my understanding of direct, clearly articulated feedback. From my perspective, this is a cultural tension.


For this reason, among others, about two weeks ago I began experimenting with Mistral AI, the French alternative to ChatGPT, and started comparing the two systems. I want to make clear that I am at the very beginning of this exploration and do not intend to turn it into a scientific study. One factor in favor of Mistral AI is the data protection framework under EU law.


At the same time, a chatbot is a chatbot, and both systems function similarly. Mistral’s chatbot (called “Le Chat”) can also be unnecessarily verbose and overly friendly. However, I noticed some differences. For example, Le Chat retrieves information from previous threads more reliably and incorporates it into the current conversation. If you criticize it, it apologizes briefly and continues – without unnecessary explanations.


I do not want to diminish AI. There are areas where it provides valuable support:

Creating overviews and lists, for example. Of course, these must be reviewed, and one must formulate expectations precisely. If you ask for a list of techniques in architectural photography, you initially receive a mixed list (blending post-production and shooting techniques). After refining the prompt once, the list becomes much more precise and is divided accordingly. I then asked Mistral AI for the same list and corrected the prompt there as well. Here are the results for photography techniques:



Both systems produced solid lists. ChatGPT included long exposure, which is an important tool for exterior work. Mistral AI included detail shots, which technically belong more to motif planning than to shooting technique. We could conclude that ChatGPT performed better in this test – but is that really the case?


I would initially say no. Here is why: because I have stored personalization data (equipment, niche) in the settings, ChatGPT’s output is influenced accordingly. In addition, ChatGPT adapts contextually: it knows my niche and, more importantly, my expectations. Long-term collaboration changes tone, structure, level of precision, and assumptions about my professional level. That says nothing about the reliability of the system’s training data, its actual technical expertise, or its susceptibility to error.


The fact remains: as a one-woman business, I rely on AI as a helper and must learn to work with its characteristics. Nevertheless, I would like to share one more anecdote about how AI can unintentionally distance people.


Recently, I had an experience with someone important for my professional development. I am independent, but I value expert knowledge, guidance, and feedback from outside. I sent an email with a short list of topics I hoped to discuss in a possible upcoming meeting. The reply I received was clearly AI-generated. The system had apparently been fed my email and generated a response that seemed to have been sent without further editing. It essentially mirrored my own message and did not make much sense. I will not go into further detail, but it surprised and disappointed me.


People notice when they are treated in a purely mechanical way. For all its advantages, it is one thing to work with AI and another to completely hand over communication to it. That is a real disadvantage.



This text, too, was written with the support of AI. The draft is mine; instead of having a system rewrite it entirely, I requested feedback and revised it paragraph by paragraph myself. It took more time – but the text carries my voice. And I am not frustrated.

 
 