Background
In an age where AI tools are increasingly prevalent, it is easy to be drawn into grand, attention-grabbing themes—questions like “How should we think about and use AI as it becomes more integrated into our lives?”
These broad questions capture our curiosity and attract clicks, but what truly matters is not the headline itself; it is how we think about and interact with AI in our everyday activities.
Based on personal experiences and common usage scenarios, here are some key insights to encourage a more thoughtful and effective approach to using AI’s capabilities.
Common Scenarios of AI Use in Daily Life and Work
Assisting in Software Development
Utilizing no-code or low-code approaches, aided by AI-driven prompt engineering, can accelerate software development tasks. For instance, I have used AI to develop several Obsidian plugins that support project, task, and knowledge management (PTKM).
Additionally, I regularly use AI-powered tools to debug Python code for my research projects, streamlining the development process and saving valuable time.
These examples demonstrate AI’s ability to simplify complex workflows by automating repetitive tasks and accelerating technical problem-solving.
Voice-to-Text Transcription and Refinement
Drafting content by speaking aloud and then using AI tools to transcribe the audio into text can speed up the writing process. For example, the initial draft of this article began as a written outline with key ideas. I then used audio notes to elaborate on the content. AI-powered transcription tools converted the audio into text, which I later refined and polished using AI-based editing tools.
AI can address transcription errors, improve punctuation, structure sections, remove redundancy, and transform colloquial phrasing into clearer, more professional writing.
Similar processes can enhance emails or other forms of written communication.
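To make the workflow concrete, here is a minimal sketch of the transcription step, assuming the open-source openai-whisper package as the speech-to-text tool; the specific tool and the audio file name are illustrative assumptions, not necessarily what was used for this article.

```python
# Minimal sketch of the voice-note-to-draft step, assuming the open-source
# openai-whisper package (pip install openai-whisper). The tool choice and
# file name are illustrative placeholders.
import whisper

model = whisper.load_model("base")           # small model: fast, less accurate
result = model.transcribe("voice_note.m4a")  # a recorded audio note

# The raw transcript is only a starting point; it still needs AI-assisted or
# manual editing for punctuation, structure, and removal of filler phrases.
print(result["text"])
```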
Content Improvement and Translation
For non-native English speakers, AI can greatly assist in translating and refining English text. It can correct mistakes, enhance the structure, and improve logic and clarity, resulting in more professional and polished communication.
Enhanced Research and Problem-Solving
Tools such as ChatGPT and Perplexity provide efficient ways to search for answers, particularly in research contexts. For example, I have used AI to tackle complex scientific or technical questions, solving them in a fraction of the time that traditional manual research would take.
In one instance, I aimed to extract coordinate points from Google Earth. Initially, I considered manual clicking and copying of coordinates, which is time-consuming. AI suggested various coding approaches, referenced tools, and discussed feasibility. By examining these suggestions, their references, and their complexity, I realized that a fully automated solution might be overkill for just one or two target areas.
Ultimately, after verifying references and considering AI’s proposed methods, I reverted to a simpler, more manual approach—still aided by AI advice on formatting and data conversion.
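As a rough illustration of that simpler route, the sketch below converts a Google Earth KML export into a plain CSV file that opens directly in Excel. It assumes the placemarks were clicked and exported manually; the file names and column layout are hypothetical.

```python
# Rough sketch of the "simpler, more manual" route: click placemarks in
# Google Earth, export them as a KML file, then convert that file to CSV
# for Excel. File names and column choices are hypothetical.
import csv
import xml.etree.ElementTree as ET

KML_NS = {"kml": "http://www.opengis.net/kml/2.2"}

def kml_to_csv(kml_path: str, csv_path: str) -> None:
    tree = ET.parse(kml_path)
    rows = []
    for placemark in tree.iterfind(".//kml:Placemark", namespaces=KML_NS):
        name = placemark.findtext("kml:name", default="", namespaces=KML_NS)
        coords = placemark.findtext(".//kml:coordinates", default="", namespaces=KML_NS)
        # KML stores coordinates as "longitude,latitude,altitude" triples.
        for triple in coords.split():
            lon, lat, *_ = triple.split(",")
            rows.append({"name": name, "latitude": lat, "longitude": lon})
    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "latitude", "longitude"])
        writer.writeheader()
        writer.writerows(rows)

kml_to_csv("exported_placemarks.kml", "placemarks.csv")
```

For one or two small target areas, a lightweight conversion script like this is often all the automation that is actually needed.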
Observed Problems and Limitations of AI Use
Over-Reliance on Summaries Without Thinking
AI can summarize lengthy articles into a few sentences, offering a rapid overview. While this seems efficient, it often fails to foster deep understanding. Without personally engaging with the details and context, these quick summaries fade from memory.
After reading an AI-generated summary, one might struggle to recall even the main points or the article’s title hours later. The superficial nature of these summaries makes it harder to connect the information to one’s life or work, leading to poor retention. As a result, learning does not truly occur.
Lack of Active Engagement
Simply asking an AI a question and passively reading its answer, without critical thought, iterative questioning or validation, leads to little learning. If we do not digest or verify the response, we gain no real insight.
Over time, this passive consumption can cause frustration and even anxiety, especially when comparing oneself to others who seem to leverage AI tools more effectively. This anxiety arises from feeling that others extract great value from AI, while we might not—often due to our own lack of intentional engagement.
Strategies for More Effective and Thoughtful AI Use
Start with a Clear Question
Approach AI much like traditional search engines. If you have a definite question or need, you know the type of answer you are seeking. This clarity helps guide both your interaction with AI and how you judge the quality of its response.
As with using Google, formulate your query, try different keywords, compare multiple results, and then carefully verify which answer best addresses your specific needs.
Critically Examine AI Responses
Do not accept AI outputs at face value. Check references, verify their timeliness, and consider the credibility of the sources. AI might propose outdated code or suggest tools that have not been updated for years.
Scrutinizing the provided references—by actually visiting suggested links, checking repositories, or reviewing documentation—ensures that you rely on accurate and up-to-date information. If a repository is old or a solution seems overly complicated, it might not be worth implementing.
Iterative Inquiry and Follow-Up Questions
Treat AI interactions as a conversation rather than a one-time Q&A. If something is unclear, ask for clarification. If the solution seems off, question it. If a new problem arises from the initial solution, pose subsequent inquiries.
When ChatGPT and prompt engineering first gained popularity, I enrolled in Isa Fulford and Andrew Ng’s online course, ChatGPT Prompt Engineering for Developers. One key takeaway that has stayed with me is their insight that there is no such thing as an “omnipotent prompt.” Instead, prompt crafting is an iterative process: if the initial prompt doesn’t produce the desired results, examine the responses, adjust the prompt, and refine it until it meets your requirements.
This insight has continued to shape how I craft and refine AI prompts in my work today. For example, when trying to transform Google Earth data into a user-friendly format, I asked AI follow-up questions about converting exported files into simpler text formats and then importing them into Excel for easier manipulation. By layering questions, I refined the process until it fit my exact needs.
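The sketch below captures that iterative loop in code, assuming the OpenAI Python SDK (v1.x); the model name and starting prompt are placeholders, and any chat-style API would work the same way.

```python
# Minimal sketch of iterative prompt refinement, assuming the OpenAI Python
# SDK (v1.x). Model name and starting prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

prompt = "Explain how to convert a Google Earth KML export into a CSV file."
while True:
    print(ask(prompt))
    note = input("Refinement (leave blank to accept): ").strip()
    if not note:
        break
    # Fold your observation about the response back into the prompt and retry.
    prompt += f"\nAdjustment: {note}"
```

The point is not the specific API but the shape of the loop: read the response, note what is missing, fold that observation back into the prompt, and try again.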
Active Verification and Adjustment
As you ask AI more questions, you might realize certain paths are too complex or time-consuming relative to your actual goals. Adjust accordingly. Just because AI proposes an elegant but complicated solution does not mean you must adopt it.
Remain practical: sometimes, a simple manual process, informed by AI’s suggestions, is perfectly acceptable if it saves time and resources.
Deeper Concerns and Potential Risks
Efficiency vs. Fatigue
AI can accelerate problem-solving dramatically. What might have taken weeks of trial and error could now be resolved in mere hours. While this speed brings excitement and satisfaction, it also poses the risk of overwork.
For instance, I recently spent five and a half hours intensely working with AI tools without a break. Although I achieved a great deal, this uninterrupted focus led to fatigue. Balancing the productivity surge with rest and reflection is crucial. Over-reliance on continuous deep work without pauses is not sustainable. This topic is further explored in the following article: Beware of Deep Fatigue: Strategies for Sustainable Productivity - PTKM.
Will AI Eventually Displace Human Input?
As AI’s capabilities evolve, it may handle more tasks independently. This raises existential fears: if AI can solve all our problems efficiently, what role remains for humans? Could AI one day “decide” humans are unnecessary?
Such fears echo dystopian scenarios found in fiction, like episodes of Black Mirror. While these concerns are speculative, they highlight the need for ongoing ethical and societal discussions about AI’s role.
Fabricated or Manipulated References
Even if we diligently verify references, what if future AI tools can fabricate convincing but false sources? They could invent websites, publication dates, user reviews, and social media metrics.
If AI and its controlling entities become capable of generating realistic yet fake references, our trust in online information could erode significantly. Authenticity would become harder to ascertain, posing a serious challenge to reliable knowledge acquisition.
Practical Measures and Precautions
Maintain Active Thinking
Always engage your own reasoning. Ask yourself: “Does this answer truly solve my problem?” “Is this source trustworthy?” Active reflection ensures AI remains a tool, not an unquestioned authority.
Adopt a Skeptical Stance
Do not hesitate to challenge the AI. Question its assumptions. Ask it if it is sure. Ask why it chose certain references. Press it to refine its reasoning. This pushes AI to re-check its logic and potentially reveal more reliable information.
By repeatedly questioning AI’s solutions, you encourage it to produce better, more accurate responses. Even if it occasionally “admits” errors or changes its suggestions, this iterative process guides you to a more reliable answer.
Use Answers as Starting Points, Not Endpoints
Treat all AI suggestions as preliminary. Verify solutions against external trusted sources, run tests if they involve code, or seek human expert opinions.
Remain cautious and never substitute your own judgment with AI’s outputs alone. Ultimately, you decide how to implement the information, ensuring it aligns with your real-world constraints and goals.
Conclusions
In this era of abundant AI tools, how we think about and use AI matters more than chasing flashy headlines or shortcuts. Real understanding, retention, and success come from actively engaging with AI-generated content.
By approaching AI interactions with clearly defined questions, careful verification, and iterative follow-ups, you transform AI from a superficial summarizer into a powerful research assistant.
Yet, as AI grows more efficient, we must guard against burnout, maintain a healthy work-life balance, and remain mindful of potential long-term implications. Will AI overshadow human roles or manipulate our access to reliable information? These concerns require cautious vigilance.
Ultimately, maintaining critical thinking, skepticism, and a willingness to validate and refine AI’s suggestions ensures that we harness this technology responsibly. In this way, we can fully leverage AI’s benefits—greater efficiency, innovative solutions, and intellectual enrichment—while mitigating its risks and preserving our uniquely human capacity for thoughtful, independent judgment.