TECH EXCLUSIVE: ‘AI Should Power Healthcare Workforce & Not Replace Human Judgment’, Says Avinav Nigam

Amit Kakkar
27 Min Read
A glimpse of the AI Assistant Maitha displayed at World Health Expo 2026. (Supplied Image)

Inside ‘Maitha’: The World’s First Nursing Workforce AI Assistant Transforming Healthcare Hiring

As artificial intelligence continues to reshape global healthcare systems, the conversation is increasingly shifting from the technology itself to how it can responsibly strengthen the healthcare workforce. Avinav Nigam, Founder and CEO of TERN Group, an AI-powered workforce infrastructure platform in the UAE, has been at the forefront of this transformation through Maitha, the world’s first AI assistant for managing the nursing workforce, developed in collaboration with Emirates Health Services (EHS). Designed to support recruitment, skills assessment, and long-term workforce development, Maitha represents a new generation of AI systems built specifically for complex healthcare environments.

In this exclusive conversation with Emirates Reporter, Nigam discusses how AI can function as workforce infrastructure, the challenges of building trust in AI within healthcare institutions, and how such systems could shape the future of workforce planning and healthcare policy. Here is the full interview.

E.R- Maitha is described as more than just a digital assistant. How do you see its role inside a healthcare system — as a tool, a workforce companion, or a decision-support system? And how much independence should it actually have?

Avinav Nigam– Maitha is workforce infrastructure, not a tool. That distinction matters.

Tools get used occasionally when someone remembers they exist. Infrastructure becomes part of how work gets done daily. Maitha sits at the intersection of three roles, and it needs all three to work:

As decision-support: Maitha helps healthcare leaders understand who’s ready for what roles, where skill gaps exist, and how to plan capacity months ahead rather than days. When EHS assessed hundreds of Emirati youth nurses across 20 facilities using our platform, the AI provided standardized competency evaluation across clinical reasoning, communication skills, and leadership potential. That’s decision support, giving leaders information they couldn’t gather manually at that scale.

As workforce companion: For healthcare professionals themselves, Maitha conducts career development conversations, identifies upskilling needs, and tracks progression over time. It’s not replacing human mentorship, it’s making mentorship scalable. A nursing director can’t have meaningful development conversations with 450 nurses regularly. Maitha can, and it creates visibility that helps directors focus their time where human judgment matters most.

As recruitment and onboarding system: Maitha streamlines the entire hiring process, from conducting AI-powered video interviews to organizing training programs to enabling virtual assessments. During our EHS rollout, we reduced manual screening effort by over 90% while maintaining standardized, bias-free evaluation. That’s not assistance, that’s infrastructure replacing fragmented, inconsistent processes.

On independence: Maitha should have full autonomy in data collection and pattern identification, but zero autonomy in final decisions about people’s careers or employment.

AI should independently conduct assessments, flag readiness levels, identify skill gaps, and recommend development pathways. But hiring decisions, promotion decisions, deployment decisions, those stay with humans who understand context, organizational culture, and individual circumstances the AI can’t see.

We built explicit escalation pathways. When Maitha encounters edge cases – complex employment histories, borderline competency scores, situations requiring judgment – it flags them for human review rather than forcing a decision. In regulated healthcare, that’s not optional. It’s how you maintain accountability. There are dozens of pages of such guardrails within Maitha’s AI “prompt code”.

The goal isn’t maximum AI independence. It is an optimal division of labor: AI handles consistency, scale, and pattern recognition. Humans handle judgment, context, and accountability.
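The escalation logic Nigam describes can be sketched in a few lines. This is a hypothetical illustration, not Maitha’s actual code: the function, field names, and thresholds are all assumptions, but it shows the pattern of letting the AI score freely while routing borderline or complex cases to a human reviewer instead of forcing an automated outcome.

```python
# Hypothetical sketch of an escalation rule: the AI scores and flags,
# but edge cases and borderline results go to human review.
# All names and thresholds below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Assessment:
    candidate_id: str
    competency_score: float      # 0.0 - 1.0 from the AI evaluation
    has_complex_history: bool    # e.g. gaps or conflicting employment records

def route(assessment: Assessment,
          pass_threshold: float = 0.75,
          review_band: float = 0.10) -> str:
    """Return 'recommend', 'reject', or 'human_review'."""
    # Edge cases always go to a person, never to an automated decision.
    if assessment.has_complex_history:
        return "human_review"
    # Borderline scores near the threshold are flagged rather than forced.
    if abs(assessment.competency_score - pass_threshold) <= review_band:
        return "human_review"
    return "recommend" if assessment.competency_score > pass_threshold else "reject"

print(route(Assessment("n-001", 0.91, False)))  # clear pass -> recommend
print(route(Assessment("n-002", 0.78, False)))  # borderline -> human_review
print(route(Assessment("n-003", 0.40, True)))   # complex history -> human_review
```

The design choice is that the AI never emits a final employment decision: its only outputs are a recommendation or a flag for human judgment.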

E.R- While collaborating and working with Emirates Health Services, what was the most difficult challenge while introducing AI — managing technology expectations, meeting regulations, or gaining trust from people?

Avinav Nigam– Gaining trust from people. And it wasn’t close.

The technology worked. Regulations were clear and we built for them from day one. But convincing healthcare professionals that AI assessment would be fair, unbiased, and actually useful – that took deliberate design and transparent communication.

The trust challenge had three dimensions:

One: Overcoming skepticism about AI fairness. Healthcare professionals have seen tech projects promise transformation and deliver frustration. When we told nurses they’d be assessed by AI, the question was: “Will this be biased? Will it understand my experience?” We made assessment criteria completely transparent, everyone knew exactly what would be evaluated: clinical judgment, communication, leadership, national service commitment. Same criteria for everyone, no variation based on who’s conducting the interview. 

The pilot proved it: hundreds of nurses assessed across 20 facilities, with a satisfaction rating above 92%. Trust came from transparency, not claims about AI quality.

Two: Demonstrating value, not just novelty. Technology for technology’s sake gets rejected quickly. People wanted to know: will this help my career? Identify my strengths? Connect me to development? We designed Maitha to provide actionable insights, not just scores. 

After assessment, nurses received reports showing strengths, development areas, and recommended pathways. We identified 22% showing high leadership potential based on consistent leadership language and strong narratives. That’s useful information for individuals and organizations.

Three: Respecting professional judgment. Healthcare professionals are rightly protective of clinical judgment. They needed to see Maitha wasn’t replacing their expertise, it was helping leaders deploy that expertise more effectively. We positioned AI explicitly as assistive: strengthens human judgment by providing better information, but humans make final decisions. Hiring managers aren’t told what to do, they get structured insights that make decisions 10x faster and more informed.

What worked: Starting with volunteers, not mandates. Sharing results transparently. Showing AI identified real patterns – 93.7% expressing national service pride, recurring geriatric training needs. 

Trust in AI doesn’t come from technical performance. It comes from demonstrating the system is fair, transparent, and actually useful.

E.R- Many healthcare leaders worry that AI can make systems more complicated instead of simpler. How did you ensure that TERN’s platform and Maitha fit into existing EHS systems without disturbing daily workflows or decision-making structures?

Avinav Nigam– We built Maitha to eliminate complexity, not add to it. That required understanding EHS’s existing workflows first, then designing AI to fit into them, not force new ones.

The integration approach:

One: Start with pain points, not capabilities.

We didn’t ask “what cool AI features can we deploy?” We asked “where are manual, time-consuming, inconsistent processes causing problems today?”

EHS was managing workforce decisions across 134 facilities in the Northern Emirates. Hiring, credentialing, assessment, and development were handled through disconnected systems. Recruitment alone averaged 3-6 months per nursing position through traditional channels. That’s not a technology problem, it’s a coordination problem.

Maitha addresses coordination by consolidating fragmented processes: conduct assessments, verify credentials, identify readiness, track development – all in one system. From the user perspective, it’s simpler because they’re not juggling multiple platforms and manual handoffs.

Two: Make AI invisible to end users where possible.

Nurses participating in Maitha assessments don’t need to understand how the AI works. They receive a secure link, complete a video interview, and get a structured report. The interface is straightforward, answering questions about clinical scenarios, leadership experiences, and professional goals. The AI evaluation happens in the background.

For hiring managers, Maitha delivers structured candidate reports with clear readiness indicators, strengths, and development needs. They don’t see “AI scores”, they see actionable workforce intelligence formatted the way they’re used to reviewing candidates.

Three: Build for existing infrastructure, not replacement.

We integrated with EHS’s credential verification systems, HR databases, and facility management structures. Maitha doesn’t require ripping out existing systems. It sits on top as a coordination layer, pulling data from existing sources and pushing structured insights back into existing workflows.

The assessment reports feed into EHS’s workforce planning processes. The development recommendations connect to their training programs. The readiness indicators inform deployment decisions. Integration, not replacement.

Four: Prove value quickly, then scale.

We started with a focused pilot: 111 Emirati youth nurses across 20 facilities. Clear scope, defined timeline, measurable outcomes. That allowed EHS to see results – a 90%+ reduction in manual interviews, identification of high-potential talent, and standardized competency insights – before committing to system-wide deployment.

Quick wins build confidence. They also reveal integration issues early when they’re easier to fix.

The complexity test:

If adding AI makes someone’s job harder, you’ve designed it wrong. When we reduced manual screening effort by over 90% while delivering better candidate insights, that’s simplification. When we identified leadership potential across an entire nursing cohort in days instead of years, that’s removing complexity from succession planning.

AI should feel like removing friction, not adding features. That’s how you ensure adoption without disrupting the workflows people depend on.

E.R- If Maitha becomes a long-term AI companion for healthcare professionals, how do you make sure people don’t depend on it too much, while still keeping it useful and valuable — especially for career growth, skills evaluation, and wellbeing?

Avinav Nigam– Healthy dependence is the goal, not independence from AI. The question isn’t “how do we prevent dependence?” It’s “how do we ensure dependence is on infrastructure that makes people more capable, not less?”

Healthcare professionals should depend on Maitha like they depend on credentialing systems or clinical decision support – as infrastructure that makes work better, not as a replacement for judgment.

Here’s how we design for that:

One: AI provides information, humans make decisions. Maitha can say “you’re ready for senior roles but would benefit from leadership development.” It cannot decide if you get promoted. That’s a human decision involving team dynamics, organizational needs, career goals AI doesn’t see. By keeping AI in the information role, professionals develop their own judgment about career moves.

Two: Focus on identifying gaps, not fixing them. When Maitha identifies the need for geriatric training, a pattern we saw across EHS cohorts, it points to the gap. The nurse chooses whether to pursue training, how to apply it, when to integrate it. Learning happens through human effort, not AI automation.

Three: Make AI contributions visible, not hidden. When professionals see how assessments work – what criteria are used, how responses are evaluated, where strengths and needs lie – they internalize the frameworks and apply them independently. After Maitha assessments, nurses better understood what clinical reasoning and leadership readiness look like. The AI taught evaluation criteria, making them better self-assessors over time.

Four: Mandatory human check-ins. Maitha complements supervisor conversations, doesn’t replace them. After AI assessments, nurses still meet managers to discuss results, set goals, plan development. AI provides structured input, but conversations happen.

On wellbeing: AI identifies patterns in workload distribution, burnout signs, and capacity pressures, but wellness interventions are human-led. If Maitha flags concerning workload patterns, it triggers human review and action, not automated responses.

The dependency we avoid: People stopping critical thinking because “AI will handle it.” We design against this by never allowing AI to make final career decisions, always showing reasoning, requiring human confirmation for high-stakes outcomes.

The dependency we encourage: Trusting AI infrastructure for consistency and pattern recognition so people focus on judgment, relationships, and growth. If a nurse depends on Maitha to track skill development, identify advancement readiness, connect to opportunities, that’s infrastructure working as designed. They’re depending on information and coordination, not on AI to make decisions or do their development work.

E.R- The UAE is moving quickly in AI governance and regulation. How has your experience with EHS influenced the way you build AI systems that are not only smart, but also trusted and aligned with national priorities?

Avinav Nigam– Working with EHS reinforced something we’ve learned operating in regulated European markets: trust and national alignment aren’t features you add later. They’re architectural decisions you make from day one.

What EHS taught us about building trusted AI:

One: Transparency creates trust faster than performance. We didn’t lead with “95% accurate AI.” We led with “here’s what we evaluate, how we score, why we identified these needs.” That mattered for Emiratization—EHS needed to trust AI assessment was fair and measuring actual readiness. When 93.7% of assessed nurses showed strong national service pride and long-term commitment, it validated the AI was capturing what matters: values alignment, not just technical skills.

Two: Audit trails aren’t optional. Every Maitha assessment generates a complete record: questions, responses, criteria, scoring reasoning. If anyone asks “why was I assessed this way?” we show them. In systems affecting careers and livelihoods, explainability is a prerequisite for trust.

Three: National priorities shape what AI optimizes for. UAE’s focus on Emiratization, youth development, and locally-led healthcare directly shaped Maitha’s design. Assessment includes national identity and service commitment as core competencies alongside clinical skills, not secondary factors. The platform conducts assessments in Arabic and English, adapts to UAE context, identifies Emirati leadership potential. These aren’t add-ons—they’re design requirements driven by national priorities.

Four: Data governance matters as much as utility. EHS handles sensitive workforce data. AI’s value comes from analysing that data at scale, but under strict governance: who accesses what, how it’s used, retention, protection. We built for UAE data residency and sovereignty, role-based access, and audit logging from the start. Systems designed for governance adapt faster than those retrofitting compliance.

Five: Stability matters more than speed in healthcare. Consumer tech deploys fast, breaks things, iterates. Healthcare workforce systems affecting patient care need reliability first. The EHS pilot ran across 20 facilities with 111 nurses – controlled deployment with clear criteria, not rushed beta tests. That builds trust with users and regulators.

Building for UAE and European regulated markets simultaneously forces us to meet the highest standards globally. Markets with clear AI governance create better AI. The UAE’s approach of moving quickly within clear frameworks creates an environment where trusted AI deployment works.
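The audit-trail idea Nigam describes, where every assessment produces a reviewable record of questions, criteria, scores, and reasoning, can be sketched roughly as follows. The field names and values are illustrative assumptions, not Maitha’s actual schema:

```python
# Illustrative sketch of an assessment audit record: each evaluation
# logs what was asked, which criterion was scored, the score, and the
# reasoning, so any outcome can be explained later. Records are
# append-only: serialize and write, never update in place.

import json
from datetime import datetime, timezone

def make_audit_record(candidate_id, question, response_summary,
                      criterion, score, reasoning):
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "question": question,
        "response_summary": response_summary,
        "criterion": criterion,
        "score": score,
        "reasoning": reasoning,  # why this score was given
    }

record = make_audit_record(
    "n-001",
    "Describe a time you escalated a deteriorating patient.",
    "Recognized early-warning signs; escalated per protocol.",
    "clinical_judgment",
    4,
    "Clear recognition of deterioration and correct escalation steps.",
)
print(json.dumps(record, indent=2))
```

Because the reasoning is stored alongside the score, a reviewer can answer “why was I assessed this way?” without rerunning the model.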

E.R- In the future, do you think AI agents like Maitha could help shape healthcare policies by identifying workforce risks or gaps? How should governments prepare for AI playing such a strategic role?

Avinav Nigam– Yes, AI Employees like Maitha could help shape healthcare policies, but only if governments treat AI workforce intelligence as infrastructure, not just analytics. We call them employees because they are to be held accountable for their deliverables and for their contribution to the other human employees and the system at large.

What AI uniquely contributes to policy:

Pattern recognition at population scale. Our EHS pilot assessed hundreds of nurses, and patterns emerged around geriatric training needs, national service commitment, and 20%+ showing leadership potential. Scale that to tens of thousands across all Emirates, and AI identifies where skill shortages will emerge 18-24 months before crisis, which specialties need training investment, whether Emiratization develops leadership-ready talent or just hits quotas. That’s policy-relevant intelligence traditional surveys can’t deliver with the same speed or granularity.

Early warning systems. Global data shows hospitals turn over 100%+ of their workforce in five years, and each turnover costs $60,000+. AI can identify patterns early: if burnout indicators rise in specific facilities, policymakers intervene before attrition accelerates. If career pathways aren’t developing as intended, training programs adjust in real-time.

Evidence-based capacity planning. Healthcare workforce planning relies on historical data and projected growth. AI adds real-time capability insights: not just “we need 500 more nurses” but “we need these specific competencies in these locations, and here’s where development is on track versus falling short.” That’s proactive policy, not reactive response.

How governments should prepare:

Build data infrastructure first. AI needs integrated data: hiring, assessments, training, deployment, retention. The UAE’s unified national health platform creates a foundation for AI-powered workforce intelligence.

Establish governance frameworks. AI identifying workforce risks becomes dangerous if misused. Need clear rules on what data AI analyzes, how insights inform policy, who has access, how individuals are protected.

Create feedback loops. AI insights are only useful if they inform decisions. Build structured processes: regular reviews, clear pathways from insight to policy, mechanisms to test intervention effects.

Maintain human judgment in policy. AI informs, doesn’t make policy. When Maitha identifies training needs, that’s input to budget discussions. Decisions on allocation and priorities remain with policymakers who understand the context AI doesn’t see.

Invest in AI literacy. Train healthcare leadership to interpret AI intelligence—understand confidence levels, recognize limitations, know when human judgment should override.

Done right, AI agents like Maitha don’t just help individual healthcare systems. They create nationwide visibility into workforce health, enabling policy decisions based on real-time intelligence rather than delayed surveys and historical trends.

E.R- Healthcare workforce shortages are a global problem. From your perspective, what structural changes — beyond faster hiring — are needed for health systems to build long-term workforce stability?

Avinav Nigam– Faster hiring solves the wrong problem. Healthcare workforce instability exists because systems are designed for transactional staffing, not sustainable careers.

What actually matters:

One: Shift to workforce capacity planning. Stop the cycle: wait for someone to quit, panic, hire, repeat. Instead, understand capacity continuously – who’s ready for what, where skills exist, when transitions are likely – and plan 12-24 months ahead. When we assessed hundreds of nurses at EHS, we created visibility into future capacity: who’s ready for leadership soon, where development investments should focus. That allows proactive planning instead of reactive firefighting.

Two: Make career progression transparent and merit-based. Healthcare professionals leave not because they dislike the work, but because they can’t see how to advance. When progression depends on subjective opinions or informal networks, talented people leave. Standardized AI assessment creates transparency: everyone is evaluated against the same criteria. We identified 22% showing high leadership potential at EHS – talent the system might have missed. When people see fair pathways, retention improves without increasing compensation.

Three: Design for coordination, not just capacity. Adding headcount into badly coordinated systems makes things worse. Hospitals hire aggressively but turnover stays high because underlying coordination problems remain unsolved. Better approach: build infrastructure connecting sourcing, credentialing, assessment, deployment, and development. When these function as integrated systems, workforce utilization improves and burnout decreases, often without adding headcount.

Four: Invest in development infrastructure, not just hiring. Most healthcare systems spend heavily on recruitment, minimally on structured development. Result: hire well, lose people to organizations that invest in growth. Maitha demonstrates the alternative: AI infrastructure supporting career development throughout employment—continuous assessment, personalized upskilling, visible progression tracking. When development infrastructure exists, retention becomes a natural outcome.

Five: Address burnout as a systems problem, not an individual resilience issue. Burnout is structural, caused by poor coordination, unclear expectations, inequitable workload distribution. AI can identify structural burnout risks: which departments have unsustainable patterns, where deployment creates pressure. That enables systems-level interventions: better scheduling, fairer workload distribution. Fixing systems reduces burnout more effectively than teaching stress management.

Six: Build for workforce continuity, not flexibility. Modern workforce management emphasizes temp staff and just-in-time hiring. In healthcare, this destroys continuity of care and burns out permanent staff. Better approach: build stable core teams and invest in their development. When EHS found 90%+ expressing long-term commitment to public service, that’s a strategic asset. Building around that commitment creates stability that temporary staffing never will.

Healthcare systems should spend less time hiring faster and more time building infrastructure for planning, transparent progression, coordination, development, and stable teams. Those changes are harder than faster recruitment but create long-term stability. Faster hiring just speeds up the turnover cycle. Structural change breaks it.

E.R- When governments and large health systems adopt AI-driven platforms, concerns about data privacy and accountability often arise. How do you address these concerns while still enabling meaningful insights and predictions?

Avinav Nigam– Privacy and insight aren’t competing objectives when AI is designed correctly. They’re complementary requirements that both need to be met.

How we build for both:

  • Data minimization. Maitha collects only what’s needed: clinical competency, communication, leadership, development needs. Not personal lives, unrelated health conditions, or factors that shouldn’t influence careers. When users know exactly what’s collected and why, privacy concerns decrease. Transparency about data use matters more than technical security measures alone, though those matter too.
  • Role-based access. Hiring managers see candidate reports. Training coordinators see development needs. Executives see aggregated trends. Individual nurses control who accesses their development plans. During the EHS pilot, results went to relevant stakeholders based on roles, and participants knew who would see what.
  • Aggregate insights where possible. Policy questions like “where do we need geriatric training?” don’t require individual identification. When we reported 22% leadership-ready or 93.7% service commitment, those insights didn’t need names. Value is in the pattern, not the person.
  • User control and consent. Healthcare professionals control their data, can request reports, see evaluations, understand what feeds into planning, and correct inaccuracies. EHS nurses participated voluntarily after understanding what assessment involved.
  • Auditability creates accountability. Every data access, assessment, decision gets logged. If someone questions why they weren’t selected, the system shows what informed the decision and who made it. This prevents arbitrary decisions disguised as “AI recommendations.”
  • Data residency and sovereignty. For government systems, data stays local. EHS workforce data in UAE infrastructure under UAE governance, not foreign clouds.
  • Differential privacy for research. For policy analysis, we apply mathematical techniques preserving aggregate patterns while making individual identification impossible.
  • The false tradeoff: People frame this as “privacy versus useful AI.” Poorly designed AI creates that tradeoff. Well-designed AI delivers insights while protecting privacy. To identify geriatric training needs, you need aggregate competency data, not individual names or histories.
  • Human accountability remains essential. AI identifies patterns, humans act on them. When Maitha recommends leadership development, that goes to supervisors who understand the context that AI doesn’t see, such as career goals, family circumstances, and team dynamics. If decisions are wrong, there’s a person accountable.

Done right, AI-driven workforce platforms don’t threaten privacy. They enhance it by replacing opaque, inconsistent human processes with transparent, auditable, governed systems where individuals have more control and visibility into how their information is used.
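The differential-privacy technique mentioned in the list above can be illustrated with a minimal sketch: publish an aggregate count with calibrated Laplace noise, so cohort-level patterns (like a geriatric training gap) survive while no individual’s record can be inferred from the published number. The cohort data, epsilon value, and function names here are illustrative assumptions:

```python
# Minimal differential-privacy sketch: add Laplace noise to a count.
# The sensitivity of a count query is 1, so noise drawn from
# Laplace(scale = 1/epsilon) gives epsilon-differential privacy.
# Data and epsilon below are illustrative assumptions.

import random

def dp_count(values, epsilon=1.0):
    """Return a noisy count of True values in `values`."""
    true_count = sum(values)
    # Difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical cohort: which nurses show a geriatric-care training gap.
needs_training = [True] * 27 + [False] * 84
noisy = dp_count(needs_training, epsilon=1.0)
print(f"noisy count: {noisy:.1f} (true count: 27)")
```

At cohort scale the noise barely moves the headline percentage, but it prevents anyone from working out whether a specific person was in the flagged group by comparing published aggregates.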

AS TOLD TO EMIRATES REPORTER


