AI Literacy + Prompt Design
Understand what these systems are and how to direct them reliably.
Live, founder-led co-creation workshops. Real work, real tools, real artifacts your team owns by the end of the session.
AI investments stall when capability lands without skill. What's missing is judgment — knowing when to reach for AI, how to direct it, and where it breaks.
Generic training teaches buttons. Six months later the buttons have moved. What lasts is the underlying skill: scoping, testing, and designing for the moments the system fails.
Your team already has the domain knowledge — the repetitive tasks, the bottlenecks, the documents nobody wants to write. LiftWork puts the capability in their hands to build the tools themselves.
We map your team's stack, AI maturity, and the three tasks worth automating first. We propose modules, session count, and duration.
Workshops run remote with founder facilitation. Your team builds working tools on the platforms your IT already approves.
The session ends. The artifacts stay with your team. No retainer, no platform lock-in, no support tail.
Six modules. Buy any combination. Sequencing recommended at intake.
Understand what these systems are and how to direct them reliably.
Use AI to build better spreadsheets, faster.
Automate the repetitive tasks your team does every day.
Build a focused AI assistant that owns a specific task reliably.
Build multi-step AI workflows that act autonomously across systems.
Use AI tools to plan, track, and communicate projects more effectively.
Five operational pains, five working tools. Built in the room, owned by the team, running in production.
QA operators at a sterile-fill facility spend two to four hours authoring each deviation report: narrative, batch context, CAPA cross-references. Regulator-sensitive, pattern-heavy, slow.
First-draft time down from roughly three hours to forty-five minutes across a four-week pilot. No approval rejections attributable to draft quality. Annualised, around 1,200 operator hours reclaimed for investigation work instead of authoring.
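Mechanically, the tool is a constrained drafting call: structured batch facts in, a reviewable narrative out. A minimal sketch of the pattern, assuming the OpenAI Python SDK; the model name, prompt wording, and batch fields are illustrative, not the facility's actual build.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_deviation_report(batch_context: dict) -> str:
    """Produce a first draft for human QA review, never a final record."""
    facts = "\n".join(f"{k}: {v}" for k, v in batch_context.items())
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: use whatever model your IT has approved
        temperature=0,   # keep drafts consistent run to run
        messages=[
            {"role": "system",
             "content": "You draft GMP deviation report narratives for QA review. "
                        "Use only the facts provided; mark any gap as [MISSING] "
                        "rather than inventing detail."},
            {"role": "user", "content": facts},
        ],
    )
    return response.choices[0].message.content

print(draft_deviation_report({
    "batch_id": "B-2411-07",
    "deviation": "line pressure excursion during fill, 14:32 to 14:41",
    "capa_refs": "CAPA-2024-112",
}))
```

The instruction to mark gaps as [MISSING] rather than invent detail is what keeps a draft like this regulator-safe: the operator supplies the judgment, the model supplies the typing speed.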
Rotating-equipment technicians at a gas-processing plant write shorthand work-order narratives ("pump seal leak, replaced, restarted"). Reliability engineers need these coded to ISO 14224 failure modes to do anything useful with them. Manual coding runs six months behind, always.
Backlog cleared in three weeks. Coding latency down from six months to under 48 hours. Downstream reliability dashboards reflect current state for the first time in years.
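The coding tool reduces to closed-set classification with a guardrail: the model may only answer from the taxonomy, and anything else falls back to human review. A sketch under stated assumptions, using the OpenAI Python SDK; the failure-mode list is a trimmed illustration, not the plant's full ISO 14224 taxonomy.

```python
from openai import OpenAI

# Trimmed, illustrative subset of ISO 14224 failure-mode codes.
FAILURE_MODES = {
    "ELP": "External leakage, process medium",
    "VIB": "Vibration",
    "STD": "Structural deficiency",
    "OHE": "Overheating",
    "UNK": "Unknown",
}

client = OpenAI()

def code_work_order(narrative: str) -> str:
    catalogue = "; ".join(f"{c} = {d}" for c, d in FAILURE_MODES.items())
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,
        messages=[
            {"role": "system",
             "content": f"Classify the work-order narrative into exactly one "
                        f"failure-mode code from: {catalogue}. Reply with the "
                        f"code only. Use UNK when unsure."},
            {"role": "user", "content": narrative},
        ],
    )
    code = response.choices[0].message.content.strip().upper()
    # Guardrail: anything outside the taxonomy routes to human review as UNK.
    return code if code in FAILURE_MODES else "UNK"

print(code_work_order("pump seal leak, replaced, restarted"))  # expect ELP
```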
A mid-market commercial underwriting team receives around eighty submissions a week — ACORD forms, loss runs, broker emails, schedules in Excel. The analyst spends most of the morning just normalising the data before underwriting can begin.
Submission-to-triage time down from three days to four hours. Broker win rate on small submissions up meaningfully; in that segment, speed of response decides who gets the business. Underwriting authority unchanged.
Relationship managers spend day one of every credit memo pulling ratios, summarising public filings, and reformatting last year's memo, all before reaching the actual credit judgment.
Day one of memo work now opens on the judgment, not the formatting. Source-tagged evidence means credit committee reviewers can trace every figure back to its origin in one click.
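Source tagging is the design choice that makes the memo reviewable, and it is cheap to enforce at extraction time. A hedged sketch of the pattern, assuming the OpenAI Python SDK and its JSON response mode; the schema and the sample excerpt are invented for illustration.

```python
import json
from openai import OpenAI

client = OpenAI()

def extract_tagged_figures(excerpts: list[dict]) -> list[dict]:
    """excerpts: [{"source": "10-K p.41", "text": "..."}]. Every figure
    returned must carry one of these source labels, or it is dropped."""
    corpus = "\n\n".join(f"[{e['source']}]\n{e['text']}" for e in excerpts)
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,
        response_format={"type": "json_object"},  # forces parseable JSON
        messages=[
            {"role": "system",
             "content": 'Return {"figures": [{"metric": str, "value": str, '
                        '"source": str}]}. Each source must be one of the '
                        "bracketed labels in the input. Omit any figure you "
                        "cannot attribute to a label."},
            {"role": "user", "content": corpus},
        ],
    )
    return json.loads(response.choices[0].message.content)["figures"]

figures = extract_tagged_figures([
    {"source": "10-K p.41", "text": "Revenue for FY2023 was $412m, up 9%."},
])
print(figures)  # e.g. [{"metric": "FY2023 revenue", "value": "$412m", "source": "10-K p.41"}]
```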
Freight forwarders ingest commercial invoices, packing lists, and certificates of origin in a dozen formats and languages. The structured fields needed for customs clearance — HS codes, origin, incoterms, values — sit buried in unstructured text.
Paperwork becomes an arrival-day task instead of a post-arrival delay. Clearance latency moves off the critical path for the operations team.
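Under the hood this is the same structured-extraction pattern as the underwriting intake tool above: unstructured text in, a fixed schema out, nulls instead of guesses. A minimal sketch assuming the OpenAI Python SDK's JSON mode; the field names and sample invoice are illustrative.

```python
import json
from openai import OpenAI

client = OpenAI()

def extract_customs_fields(document_text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Extract hs_code, country_of_origin, incoterm, and "
                        "invoice_value_usd from the document as a JSON object. "
                        "Use null for any field not stated; never guess."},
            {"role": "user", "content": document_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

print(extract_customs_fields(
    "Commercial Invoice. Goods: ceramic tiles, HS 6907.21. "
    "Origin: Italy. Terms: CIF Rotterdam. Total: USD 48,200."
))  # the broker still verifies before filing; the tool just removes the retyping
```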
“My ops team came in skeptical and walked out with three working assistants by the end of the week. Two are still in production six months later.”
“Leonardo doesn't sell tools — he teaches the judgment behind them. That is what makes it stick after he's gone.”
“The most useful two days we've spent on AI all year. Half the slides, ten times the output.”
The quick answers. Anything not here, we cover on the call.
Modular sessions of 2–4 hours each. Count and sequencing are decided at intake based on your team's AI maturity and the tasks you want to automate — not a fixed curriculum.
Per engagement, quoted after intake. You see the full scope and price before anything starts. No retainers, no per-seat fees, no surprise add-ons.
No. We build on the tools your IT has already approved — Microsoft 365, Google Workspace, OpenAI, Anthropic, or your existing stack. The goal is tools your team can keep running without us.
Then we work inside the AI your enterprise has already cleared — Copilot, Gemini, or a self-hosted model. The judgment skills — scoping, prompting, testing, governing — transfer across platforms.
Leonardo Dentone, founder, always. No subcontractors, no associate facilitators. Same person through intake, sessions, and hand-off.
Working AI tools your team built on your real tasks, hosted on your own accounts. Plus the mental models to spot the next automatable task without us. No support retainer. No platform lock-in.
A 30-minute discovery call before either of us commits to anything.