You used ChatGPT to help write your resume.
You used AI to research the company.
You even used AI to help prepare answers to common interview questions.
Now you're wondering: Should I mention any of this in the interview?
I built Revarta almost entirely using AI tools—from code generation to content creation to workflow automation. I'm now building AI agents at Arkero.ai. So I've thought deeply about how to discuss AI usage authentically.
The answer isn't as simple as "yes" or "no." It depends on how you talk about it—and specifically, whether you demonstrate sophistication or dependency.
The Short Answer
Yes, you should mention AI usage in most cases. But not because it's impressive—because it's expected.
In 2025, 75% of knowledge workers use AI tools at work. Claiming you don't use AI looks either dishonest or out of touch. Neither is a good look.
The real question isn't whether to mention AI. It's how to mention it in a way that shows prompting sophistication, evaluation habits, and judgment about limitations—not dependency.
When You Absolutely SHOULD Mention AI
1. When They Ask Directly
If an interviewer asks "Do you use AI tools?" or "How do you use AI in your work?"—answer honestly and specifically.
These questions are increasingly common. Companies want to know you can leverage AI effectively. Dodging the question or being vague signals either dishonesty or inability to articulate your workflow.
Sophisticated response:
"I use different tools for different purposes. Claude for analysis with structured prompts—I provide explicit output formats and ask for confidence levels. Copilot for coding, but I treat it like code from a junior dev: I never commit code I can't explain, and I run the tests before reading it. Perplexity for research because it cites sources, which makes my '10% audit' more tractable—I verify 10% of claims against primary sources."
2. When AI Genuinely Improved Your Work
If AI helped you achieve better outcomes, mention it. But frame it correctly with your prompting approach and evaluation methodology:
Sophisticated framing:
"I used Claude to synthesize research from 50 sources. My prompting approach: I provided explicit output constraints—'identify the 3 most significant trends, cite specific quotes with source references, and rate your confidence 1-10 on each claim.' Then I ran my standard 10% audit on the citations. The AI missed a subtle pattern because it grouped by keyword similarity rather than semantic meaning, which I caught and corrected. The combination of AI synthesis and human verification gave us a more comprehensive analysis in half the time."
Weak framing:
"AI did my research for me, so I had more time for other stuff."
The difference: The first shows you as the critical thinker with a systematic approach. The second shows AI as doing your job while you're... elsewhere.
3. When It Demonstrates Problem-Solving
Using AI to solve a problem creatively shows initiative—especially when you can discuss your prompting strategy:
"Our team was behind on documentation. I built structured prompts that could generate first drafts from our code comments—I used few-shot prompting, providing 3 examples of our documentation style first. Engineers reviewed and refined, and I ran spot checks on the first 20% to establish an error rate. Once I confirmed 90%+ accuracy, we scaled the process. We cleared a 6-month backlog in 3 weeks."
This positions you as someone who understands prompt engineering and has a methodology for validating AI outputs.
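The few-shot approach described above can be sketched in a few lines. This is a hypothetical illustration, not the actual Revarta pipeline: the example code snippets, the documentation style, and the function names are all invented placeholders.

```python
# Hypothetical sketch of few-shot prompting for doc generation: prepend
# examples of the target documentation style before the code to document.
# The style examples and the final code snippet are placeholders.

STYLE_EXAMPLES = [
    ("def add(a, b): ...", "add(a, b) -- Return the sum of a and b."),
    ("def norm(v): ...", "norm(v) -- Return the Euclidean norm of v."),
    ("def clamp(x, lo, hi): ...", "clamp(x, lo, hi) -- Bound x to [lo, hi]."),
]

def build_few_shot_prompt(code: str) -> str:
    """Show three examples of the house documentation style, then the real task."""
    shots = "\n\n".join(f"Code:\n{c}\nDocs:\n{d}" for c, d in STYLE_EXAMPLES)
    # End with an open "Docs:" so the model completes in the demonstrated style.
    return f"{shots}\n\nCode:\n{code}\nDocs:"

prompt = build_few_shot_prompt("def scale(v, k): ...")
```

The point of the pattern: the model imitates the demonstrated style far more reliably than it follows a written style description.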
4. When Discussing Productivity or Process Improvements
AI is a legitimate productivity tool. If you've improved processes using AI, that's worth mentioning—especially with specific prompting techniques:
"I reduced our report generation time from 4 hours to 90 minutes by using AI for initial data summarization. My prompt structure matters: I provide the data, specify the output format explicitly, and ask the AI to flag any data points where it's uncertain about interpretation. I then focus my time on the uncertain areas and the strategic recommendations—the parts that require business judgment AI can't replicate."
The Prompting Sophistication That Impresses
When discussing AI usage, demonstrating these skills signals you're not just an AI user—you're an effective one:
Structured Prompting
"I don't just ask AI questions—I provide explicit output formats. Instead of 'summarize this document,' I prompt: 'Identify the 5 key points, provide a supporting quote for each, and rate your confidence 1-10.' The structure makes outputs immediately usable."
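As a concrete illustration, the "explicit output format" idea above amounts to wrapping the raw request in constraints. A minimal sketch; the wording, point count, and function name are illustrative assumptions, not any specific tool's API:

```python
# Hypothetical sketch: turn "summarize this document" into a structured
# prompt with an explicit output format and requested confidence levels.

def build_structured_prompt(document: str, n_points: int = 5) -> str:
    """Wrap a bare summarization request with explicit output constraints."""
    return (
        f"Identify the {n_points} key points in the document below.\n"
        "For each point, provide:\n"
        "- a one-sentence summary\n"
        "- a supporting quote, copied verbatim\n"
        "- your confidence, rated 1-10\n\n"
        f"Document:\n{document}"
    )

prompt = build_structured_prompt("Q3 revenue grew 12%...", n_points=3)
```

The constraint lines do the work: the output arrives in a shape you can use immediately instead of a wall of prose you have to re-process.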
Context Provision
"I've learned that context is everything. For any analysis task, I front-load context documents—background information, constraints, terminology. This transforms generic outputs into relevant ones. I also explicitly tell the AI what it doesn't know: 'You don't have access to our sales data or internal strategy.'"
Evaluation Methodology
"I track accuracy for any new AI use case—the first 10-20 instances tell me whether to trust that workflow. For research, I run a '10% audit.' For code, tests come before reading. I've built 'evals' into my process."
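The "10% audit" above is simple enough to sketch. This is a minimal illustration of the idea, with an invented 90% trust threshold and invented function names; it assumes you record a pass/fail for each manually verified claim:

```python
import random

# Hypothetical sketch of a "10% audit": sample a fraction of AI-generated
# claims for manual verification, then decide from the observed accuracy
# whether to trust the workflow. Threshold and fraction are illustrative.

def pick_audit_sample(claims: list, fraction: float = 0.10, seed: int = 0) -> list:
    """Select ~10% of claims (at least one) for manual source-checking."""
    k = max(1, round(len(claims) * fraction))
    return random.Random(seed).sample(claims, k)

def trust_workflow(audit_results: list, threshold: float = 0.90) -> bool:
    """After the first 10-20 audited instances, decide whether to trust it."""
    accuracy = sum(audit_results) / len(audit_results)
    return accuracy >= threshold

claims = [f"claim-{i}" for i in range(50)]
sample = pick_audit_sample(claims)  # 5 of 50 claims to verify by hand
```

Even this crude version beats the default most people use, which is auditing nothing at all.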
Iterative Refinement
"I rarely accept first outputs. My workflow: initial prompt, critique what's missing, ask for specific improvements, then run a 'red team' pass—'what would a skeptic say about this analysis?' This dramatically improves output quality."
When You Should Be CAREFUL Mentioning AI
1. For Core Competencies They're Hiring For
If they're hiring you to write, and you mention AI writes everything—that's a red flag.
If they're hiring you for analysis, and AI does all your analysis—why do they need you?
The rule: AI should augment your core skills, not replace them. The human value-add is judgment, verification, and strategic thinking.
Weak:
"I use ChatGPT to write all my code."
Sophisticated:
"I use Copilot to accelerate boilerplate—it probably saves me 30-40% of keystrokes. But I treat every suggestion like code from a junior developer: good starting point, needs careful review. I never commit code I can't explain, and I'm especially skeptical of AI-generated error handling and security logic. AI writes faster, not better—human review is non-negotiable."
2. For Creative Work That Should Be Yours
If they ask about your portfolio, writing samples, or creative work—and AI created it—tread carefully.
It's fine to use AI for ideation, outlining, or initial drafts. But if you're presenting AI-generated work as your own original thinking, you're on ethically thin ice.
Sophisticated approach:
"I use AI for brainstorming and initial structure. This piece started as an AI outline, but I used iterative prompting to explore different angles first. The voice, insights, and final form are mine. I've found AI is good at generating options, not at judging which option is right—that requires the human understanding of context, audience, and purpose."
3. For Confidential or Sensitive Work
Some industries have strict AI policies—legal, healthcare, finance, government. Some companies prohibit AI for confidential information.
Research the company's stance before the interview. If you're unsure, frame cautiously:
"I use AI for personal productivity tasks, but I'm careful about data governance. I never input confidential or proprietary information into AI tools. I treat AI context windows like public spaces—anything I put in could theoretically be seen by others. I follow whatever policies my employer has, and I'm proactive about understanding those boundaries."
4. For Work Requiring Subject Matter Expertise
If they're hiring you as an expert, don't make it sound like AI is the expert:
Weak (for an analyst role):
"I ask ChatGPT to explain financial concepts I don't understand."
Sophisticated:
"I occasionally use AI to quickly refresh on concepts or sanity-check my reasoning—similar to consulting documentation. But the analysis and recommendations come from experience and business judgment that AI doesn't have. In fact, I've caught AI making confident but incorrect claims about financial regulations, which reinforced my practice of never trusting AI on domain-specific rules without verification."
Stop Guessing. See Exactly How You Sound.
Reading about interviews only gets you so far. Speaking out loud is what builds the skill.
Get specific feedback on what's working and what's killing your chances. Know your blind spots before the real interview.
The "AI for Interview Prep" Question
What about using AI to prepare for the interview itself? Should you mention that?
The honest answer: It depends on context, but generally yes—if framed correctly with sophistication:
"I used AI to research the company and structure my thoughts. My approach: I asked Claude to analyze the job description and identify likely behavioral questions, then I drafted my answers in my own words. I used AI for one more pass—asking it to 'identify gaps in my answer' and 'what would a skeptical interviewer push back on.' The examples are my real experiences, and I've practiced delivering them in my own voice. AI helped me prepare systematically; it's not speaking for me right now."
This shows:
- Strategic use of available tools (good)
- Specific prompting methodology (sophisticated)
- Clear distinction between preparation and performance (mature)
- Authenticity about your own experiences (trustworthy)
What NOT to say:
"ChatGPT helped me craft the perfect answers to your questions."
That makes the interviewer wonder if they're talking to you or to ChatGPT.
What Hiring Managers Actually Think
I've talked to dozens of hiring managers about this. Here's what they consistently say:
They WANT to hear:
- Prompting sophistication (structured prompts, context provision, iterative refinement)
- Evaluation methodology (how you verify AI outputs systematically)
- Judgment about limitations (what AI can't do and where you don't trust it)
- Human value-add (what YOU bring that AI doesn't)
They DON'T want to hear:
- Vague enthusiasm ("AI is amazing for everything")
- Over-reliance signals ("AI does most of my work")
- Lack of verification ("I just trust what it outputs")
- Generic answers ("I use ChatGPT like everyone else")
The ideal impression:
"This candidate has a sophisticated approach to AI—structured prompting, systematic evaluation, clear understanding of limitations. They use AI as a power tool, not a crutch. I can trust them to leverage AI effectively without creating problems."
Industry-Specific Guidance (With Sophistication)
Tech / Software Engineering
AI usage is expected and normal. Focus on your prompting approach and verification methodology:
"Copilot probably accelerates 30-40% of my typing, but 100% still goes through my review. I've developed specific prompting patterns: for complex logic, I add constraints in comments—'handle null cases,' 'be thread-safe,' 'follow our error logging format.' These constraints dramatically improve output quality. I caught a security vulnerability in an AI suggestion last month—it matched patterns without understanding security implications. Now I'm especially skeptical of authentication and error handling code."
Marketing / Content
Tread carefully—they want your voice, not AI's:
"AI for structure and ideation, humans for voice and judgment. My prompt approach: 'Give me 10 angles for [topic], rated by controversy level' or 'Outline with hooks, avoiding [common clichés].' I never publish AI prose directly—the tell for AI content is sameness. The human value-add is personality, unexpected connections, and understanding our specific audience in ways AI can't."
Finance / Consulting
Emphasize analysis judgment and verification methodology:
"AI accelerates data processing and initial pattern recognition. But I've learned to prompt explicitly for uncertainty: 'What assumptions are you making?' and 'Rate your confidence 1-10 on each conclusion.' The recommendations and strategic insights come from experience and business judgment AI doesn't have. I run what I call a 'sanity check loop'—does this output make sense given what I know about the market?"
Healthcare / Legal
Mention AI cautiously and emphasize compliance and verification:
"I use AI only for non-patient/non-client work—research synthesis, administrative tasks. I treat AI context windows like public spaces—nothing confidential goes in. For any factual claims, I verify against authoritative sources, because AI confidently cites papers that don't exist and invents plausible-sounding but incorrect information. I follow organizational policies on AI usage and I'm proactive about understanding those boundaries."
The Framework for Any AI Mention
When discussing AI, structure your answer with a formula that signals sophistication:
Tool + Prompting Approach + Outcome + Verification Method + Human Judgment
Examples:
"I used Claude for synthesizing research with structured prompts—explicit output format, source citations required, confidence levels requested. Cut my research time in half. But I always verify key claims against primary sources because AI sometimes cites papers that don't exist. The strategic insights and recommendations are mine."
"I used Copilot for test generation, providing few-shot examples of our testing patterns first. Improved our test coverage from 40% to 70%. But I review every test to ensure it's actually testing meaningful behavior, not just achieving coverage metrics. AI pattern-matches; I validate intent."
"I used AI for customer feedback analysis with explicit categorization instructions—'group by underlying intent, not by keywords.' Helped identify patterns in 500 responses. But I manually reviewed edge cases because AI clusters by language similarity rather than semantic meaning. I caught a pattern the AI missed that became our #2 product priority."
Red Flags to Avoid
These statements make hiring managers nervous:
❌ "AI writes my [core job function]"
❌ "I couldn't do my job without AI"
❌ "AI is better at [X] than humans"
❌ "I don't verify AI outputs—it's usually right"
❌ "I use AI for everything"
Instead, aim for:
✅ "AI accelerates [specific task] with [prompting approach], so I can focus on [strategic work]"
✅ "AI handles [routine work], but [complex work] requires human judgment I provide"
✅ "AI gives me a starting point; I have a verification methodology before anything goes out"
✅ "I've catalogued where AI helps and where it creates problems—I don't trust it for [specific limitations]"
The Bottom Line
Yes, mention AI. But mention it like a sophisticated professional who understands prompting strategies, has evaluation methodology, and knows AI's limitations.
The candidates who impress hiring managers in 2025 aren't the ones who use the most AI. They're the ones who demonstrate prompting sophistication, systematic verification, and clear judgment about when AI helps versus hurts.
Show that you're the thinking human who leverages AI strategically—not an AI operator who hopes the outputs are correct.
For a complete framework on discussing AI in interviews, including the PTOJ method and advanced prompting strategies, see our comprehensive guide to talking about AI in job interviews.
Ready to practice talking about AI in interviews?
Try Revarta free (no signup required) and get feedback on how you frame your AI usage with sophistication.
Because knowing what to say is different from being able to say it confidently.