01 / About

A technical diagnostic, before the sales narrative

Can I Hire an AI? turns a role description into a structured report in under five minutes — automation score, task-level breakdown, 36-month ROI projection, and regulatory readiness. No consultant. No €5K discovery phase.

Start free analysis

  • Multi-node Agentic AI pipeline
  • Deterministic 36-month ROI model
  • Jurisdiction-aware regulatory screening
  • No training on your inputs

02 / The problem

Three questions that deserve real answers

AI automation is happening now, in companies your size, in roles that look a lot like yours. The barrier isn't capability — it's clarity.

01

Which role do I start with?

02

What would it actually cost to automate this?

03

Is it worth it? When do I break even?

Most companies know they should be doing something with AI. Very few know where to start.

03 / Scope

Analyze a role, a department, or a process

You might be thinking about a single role, a whole department, or a specific process. The tool works at all three levels — you pick the scope that matches the question you're trying to answer.

A specific role

"Sales Development Representative", "Finance Manager", "Customer Support Agent". The analysis breaks the role into its component tasks and assesses each one independently.

A full department

"Customer Success team of 8", "Finance & Accounting department". Useful when you want to understand the automation landscape across a function before deciding where to focus.

A task or process

"Monthly invoice reconciliation and supplier payment cycle". Useful when you already know the process you want to target and just want a rigorous assessment of what's automatable. This scope produces hours-recovered estimates — not monetary ROI — since the work spans roles and can't be tied to a single salary baseline.


Step 1 of 5 — choose the level of analysis. The rest of the form adapts accordingly.

04 / The form

Designed for humans, not AI experts

We invested as much engineering effort in the form as in the analysis engine behind it. Designed for the person who knows the role — not the person who knows AI.

  • No AI jargon, no dropdowns full of terminology
  • One plain-language question at a time
  • Most users finish in under a minute
  • Describe the work — the system handles the rest
The form guides users through the analysis in under a minute.

05 / Ethics

Not a tool to decide who to fire

That framing gets the whole thing backwards. The question isn't "can a machine replace this person?" The question is: how much of this person's time is spent on repetitive, low-judgment tasks — so they can spend more time on the work that actually requires a human?

Not this → This

  • Evaluating whether a person's job should be eliminated → Identifying how much of a role is repetitive, low-judgment work that automation can absorb
  • A headcount reduction tool → A capacity-recovery tool — freeing human hours for higher-value work
  • A verdict on whether a person is replaceable by AI → A task-level analysis of which hours follow predictable patterns vs. require human judgment
  • A justification for layoffs → A starting point for a conversation about where your team's time is actually going

06 / Method

A structured analysis method

The role description is normalized into a structured profile, specialist nodes run in parallel across four independent dimensions, and only then is the result assembled. Financials stay deterministic and inspectable — not hidden inside generated prose.

Context-aware profile

Role, country, sector, salary, and workload are treated as one working brief that informs every downstream step.

Parallel specialist nodes

Task decomposition, regulatory screening, solution design, and rollout planning run as separate reasoning steps before synthesis.

Deterministic financials

Savings, ROI, and payback use a fixed model. Inputs and assumptions remain inspectable in the result view.
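The pipeline shape described above can be sketched as a fan-out/fan-in flow. This is an illustrative skeleton only: every node function here is a trivial hypothetical stand-in, while the real nodes are LLM reasoning steps.

```python
import asyncio

# Hypothetical stand-ins for the specialist nodes; the real ones
# are LLM reasoning steps, not one-liners.
def normalize(text: str) -> dict:
    return {"role": text.strip()}          # one working brief for all nodes

async def decompose_tasks(p): return ["task breakdown for " + p["role"]]
async def screen_regulation(p): return ["GDPR check"]
async def design_solution(p): return ["solution outline"]
async def plan_rollout(p): return ["pilot", "deploy"]

async def analyze(role_description: str) -> dict:
    profile = normalize(role_description)
    # Specialist nodes run in parallel, each from the same profile.
    tasks, reg, sol, rollout = await asyncio.gather(
        decompose_tasks(profile), screen_regulation(profile),
        design_solution(profile), plan_rollout(profile),
    )
    # Only after all nodes finish is the result assembled.
    return {"profile": profile, "tasks": tasks, "regulation": reg,
            "solution": sol, "rollout": rollout}

result = asyncio.run(analyze("Finance Manager"))
```

The key design property the sketch shows is that no node sees another node's output; synthesis happens once, at the end.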

07 / Governance

Security and regulation by design

Defense-in-depth

Free-text input from anonymous users is an attack surface. Three independent layers run before any LLM sees your data.

01

Deterministic screening

Regex guardrail scans for PII signatures (emails, phones, URLs) and prompt injection patterns. Pure code — no model, no inference cost.
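A minimal sketch of what such a deterministic screen could look like. The pattern names and regexes below are illustrative placeholders, not the product's actual rules.

```python
import re

# Illustrative PII signatures (emails, phones, URLs).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "url": re.compile(r"https?://\S+"),
}
# Illustrative prompt-injection signatures.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"system prompt", re.I),
]

def screen(text: str) -> list[str]:
    """Return a list of flags; an empty list means the input passes."""
    flags = [f"pii:{name}" for name, pat in PII_PATTERNS.items() if pat.search(text)]
    flags += ["injection" for pat in INJECTION_PATTERNS if pat.search(text)]
    return flags
```

Because it is pure code, this layer costs nothing per request and runs before any model is invoked.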

02

Sanitization

Unicode normalization, control character stripping, length limits. The model sees a clean, bounded string — not raw user input.
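The sanitization step can be sketched in a few lines. The length bound and the set of preserved characters are assumptions for illustration, not the product's actual limits.

```python
import unicodedata

MAX_LEN = 2000  # illustrative bound, not the product's actual limit

def sanitize(raw: str) -> str:
    """Normalize Unicode, strip control characters, bound the length."""
    text = unicodedata.normalize("NFKC", raw)  # unify Unicode lookalikes
    # Drop control/format characters (category C*), keeping newline and tab.
    text = "".join(ch for ch in text
                   if unicodedata.category(ch)[0] != "C" or ch in "\n\t")
    return text[:MAX_LEN].strip()
```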

03

OpenAI Content Safety

OpenAI runs independent PII and jailbreak detection at the infrastructure level. A separate backstop if anything slips through.

The same controls apply to every AI solution we build for clients. Data handling, prompt safety, and infrastructure-level guardrails are engineered in from day one — not retrofitted.

Regulatory screening runs in parallel, not as an afterthought

The most avoidable failure mode: discovering regulatory constraints after the system is built. Most agencies skip this. We screen against the full context: country, sector, tasks, and organization type.

GDPR

Data protection obligations

Personal data in automated pipelines triggers obligations around lawful basis, data minimisation, and human oversight. Flagged where relevant.

EU AI Act

High-risk classification + documentation

Employment decisions and safety-critical systems trigger compliance requirements. The screening checks classification and documentation obligations.

National rules

Sector and jurisdiction-specific constraints

German co-determination, French union consultation, healthcare data rules — jurisdiction-specific constraints checked against your declared country and sector.

08 / Outputs

Five outputs, one coherent readout

The result is meant to help an operations or leadership team decide whether a workflow deserves deeper investigation, not to close an implementation deal on the spot.

How the economic model works

The model runs month-by-month projections over 36 months. Every month: (annual productivity value / 12 × adoption rate) − monthly recurring cost. Year-1 ROI accumulates months 1–12. Steady-state ROI uses months 25–36. Payback is the first month where cumulative position turns positive.
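The projection described above can be sketched as follows. All inputs are hypothetical, and the adoption curve is a simple linear ramp rather than the tool's calibrated one.

```python
def project(annual_value, upfront_cost, monthly_cost, adoption, months=36):
    """Month-by-month model as described above (illustrative only).

    adoption(m) -> adoption rate in [0, 1] for month m (1-based).
    Returns (year1_net, steady_state_net, payback_month).
    """
    cumulative = -upfront_cost            # start in the hole by the build cost
    nets, payback = [], None
    for m in range(1, months + 1):
        net = annual_value / 12 * adoption(m) - monthly_cost
        cumulative += net
        nets.append(net)
        if payback is None and cumulative > 0:
            payback = m                   # first month cumulative turns positive
    return sum(nets[:12]), sum(nets[24:36]), payback

# Hypothetical inputs: €60k annual value, €10k upfront, €600/month,
# linear adoption ramp reaching 100% at month 6.
y1, steady, pb = project(60_000, 10_000, 600, lambda m: min(1.0, m / 6))
```

With these hypothetical numbers the cumulative position turns positive in month 6, year-1 net value is about €40,300, and the steady-state window (months 25–36) nets €52,800.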

Chart: adoption rate ramp and cumulative value over 36 months (illustrative, typical SDR profile).

Realization rate

  • Single role (100%): full projected productivity value is assumed realizable.
  • Department / team (70%): 30% absorbed by reclassification overhead, QA processes, and change management — consistent with real deployment data.
  • Task or process (hours only): the output reports hours recovered rather than monetary ROI.

When you select a task or process as the scope, the analysis shifts to a productivity diagnosis: instead of economic ROI, the output measures hours recovered per week. The work spans multiple roles and cannot be tied to a single salary baseline, so no payback period or NPV is calculated.

Demo

See the kind of report you get back

The walkthrough shows the actual output format: score, task evidence, financial assumptions, risks, and a practical implementation path. For more inspiration, browse real example roles on the homepage.

Walkthrough of the results page experience.

Get inspired by examples

AI-Readiness Score

An overall percentage of the work that AI could handle, derived from task-level scores weighted by time. Not a guess — each task is scored independently.
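A time-weighted score of this kind reduces to a weighted average over the task breakdown. The task mix below is hypothetical, for illustration only.

```python
def readiness_score(tasks):
    """Time-weighted automation score.

    tasks: list of (hours_per_week, automation_score_0_to_1) pairs.
    Illustrative sketch; the tool's actual weighting may differ.
    """
    total_hours = sum(hours for hours, _ in tasks)
    return sum(hours * score for hours, score in tasks) / total_hours

# Hypothetical task mix: 15h highly automatable, 10h partly, 15h mostly human.
score = readiness_score([(15, 0.9), (10, 0.6), (15, 0.2)])
```

For this hypothetical mix the overall score lands at 56.25%, even though individual tasks range from 20% to 90%, which is why the task-level view matters more than the headline number.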

Task breakdown

Every task scored individually. You see which parts of the work AI can handle and which remain distinctly human — and why. The question stops being "can AI do this job?" and starts being "which hours of the day?"

Economics

Hours recovered, capacity value, estimated savings, implementation cost, ROI, and payback period. Built on a non-linear adoption curve over 36 months. Inputs and assumptions are visible in the result.

Implementation plan

A phased timeline with concrete steps: discovery and design, supervised pilot, full deployment. It tells you what to do in what order.

Regulatory readiness

GDPR obligations, EU AI Act requirements, and sector- or country-specific rules that apply to the proposed automation.

09 / References

Based on traceable sources, with explicit assumptions

Every model assumption, adoption curve, and regulatory flag traces back to something specific — consultancy studies, peer-reviewed papers, real deployment data, regulation, and honest reporting on where AI still falls short. These are the sources that inform the tool.

Papers · 4 sources

Academic papers and research reviews

These sources go deeper than market commentary: they look at specific tasks, controlled studies, and domain risks. They help the analysis distinguish work that can be safely augmented from decisions that still need human judgment, especially in recruiting, healthcare, software, and regulated document workflows.

Run a free analysis on any role or process

Describe a role, a department, or a set of tasks. Get your automation score, ROI projection, and payback period in under five minutes. No signup required.