For eleven days, the email sat in the drafts folder, incomplete and reluctant. It was the kind of message that opens confidently and slides into apology by the third paragraph: "I understand budgets are tight," "I don't want to be unreasonable," the typical softening people do when requesting money they have already earned.
Nothing changed until a coworker casually mentioned that she had entered her job title into ChatGPT over lunch. The prompt went in. A number came back. And that number was not $72,000.
| Profile & Key Information | Details |
|---|---|
| Subject | Anonymous tech professional (composite account based on reported experiences) |
| Industry | Technology / Digital Marketing |
| Role at Time of Negotiation | Senior Content Strategist |
| Years of Experience | 7 years |
| AI Tool Used | ChatGPT (OpenAI) |
| Salary Before Negotiation | $72,000/year |
| Salary After Negotiation | ~$101,000/year |
| Raise Percentage | 40% |
| Outcome | Raise approved; informal HR warning issued |
| Reference Source | Payscale 2024–2025 Compensation Best Practices Report |
| HR Response | Verbal caution regarding use of unverified external data in formal negotiations |
| Location | Austin, Texas |
| Negotiation Format | Written email + follow-up meeting |
| AI Prompt Style Used | Detailed, role-specific, location-aware |
| Cross-Verification Done? | Yes — Glassdoor and LinkedIn Salary |
For an Austin-based senior content strategist with seven years of experience at a mid-sized private tech company, the AI returned between $98,000 and $105,000. The prompt wasn’t ambiguous; it included details about the company’s size, industry, years of experience, variable compensation plans, and even standard equity considerations.
Jessica Pillow, global head of total rewards at HR platform Deel, regularly fields questions about AI-assisted salary research, and she says that degree of specificity matters. "If your prompt isn't very specific," she has said openly, "you're going to get an answer that is just maybe not grounded in reality." In this case, the specificity pointed in the right direction.
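The exact wording of the prompt wasn't shared, but a request with the level of detail described above might look something like this (an illustrative reconstruction, not the original):

```text
Act as a compensation analyst. What is a reasonable base salary range for a
Senior Content Strategist in Austin, Texas, with 7 years of experience at a
mid-sized, privately held tech company? Assume the role includes a variable
compensation plan and standard equity considerations. Give a low/median/high
range rather than a single number, and note which market data sources the
estimate draws on.
```

Pillow's point holds either way: the narrower and more concrete the inputs, the more grounded the range that comes back.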

The email was revised. The language became clearer, the number bolder, and the ask landed with a quiet confidence that had not been there before. There is something almost unsettling about the way a machine-generated figure can defuse the tension of a very human conversation.
The meeting took place on a Thursday. By the following Tuesday, a revised offer of $101,200 was on the table, roughly 40% more than the salary that had held steady for the previous three years.
It would be easy to frame this as a technological victory over corporate inertia, and in some respects it was. But the story doesn't end there. About two weeks after the offer was accepted came a brief, uncomfortable conversation with an HR representative; not a formal reprimand, but it felt like one. The concern, it was explained, was where the salary figures had come from: using AI-generated data in a formal negotiation without adequate verification had reportedly raised internal concerns.
According to a Payscale survey, 63% of employers reported that over the previous year, more employees had based salary requests on erroneous or unverified information. The HR department, it seemed, was well aware of the trend.
It was a mild warning, but it carried a real undercurrent: a sense that companies are becoming more conscious of where their employees get their numbers. In Austin's tech corridor, where salary benchmarking has always been competitive, this kind of thing apparently doesn't go unnoticed. The raise might have happened anyway. It's also possible that because the AI number arrived without a traceable, reliable source, the negotiation felt more like a challenge than a dialogue. That distinction may seem small, but it matters.
Looking back, the experience showed that while AI can be a genuinely helpful starting point, perhaps even a surprisingly powerful one, it is not a negotiation strategy in itself. The number was credible enough to use only because it roughly matched what Glassdoor and LinkedIn Salary data also suggested.
Cross-referencing is what kept the situation from becoming fully uncomfortable. "It's not the final answer," Pillow says bluntly. That framing is right: you still have to do your homework. The AI was the reassurance, not the proof.
There's a bigger picture here. According to a Korn Ferry survey conducted in early 2026, AI experimentation in total rewards and compensation is growing, but most organizations are still in the early stages; 57% had not even begun internal testing. Compensation experts are using AI to gather and process data more quickly, not to set precise salary amounts for individual employees.
So while employees are entering negotiations armed with chatbot outputs, the people across the table are still relying primarily on human judgment and conventional compensation structures. It's an intriguing gap, one that creates an information asymmetry that can cut either way.
Watching this unfold across industries, it seems that businesses and workers are in an awkward period of transition, with both sides experimenting with what the technology can and cannot do and neither fully understanding the rules yet. Peer-reviewed research on AI fairness has found statistically significant differences in salary recommendations across ChatGPT model versions, depending on the prompt's wording, gender, and university background.
The same question, asked from the employer's perspective rather than the employee's, yielded significantly different results. That inconsistency is worth pausing on.
The raise stands. The money lands every two weeks, consistent and drama-free. But the lesson isn't really about the forty percent; it's about what happens when a new tool enters an old negotiation and the rules are still up for debate.
Technically, the AI didn't negotiate anything. It gave one person enough information to negotiate on her own behalf. That may be the most accurate way to describe what it actually does: it lets you enter the conversation with less fear, rather than replacing the conversation itself.
