Automated Resume Shortlisting (Batch Processor)

Nov 2025
Ayush Kulshreshtha
HR · Recruiting · Automation

Ready-made WorqHat template

Launch "Automated Resume Shortlisting (Batch Processor)" as a workflow

Duplicate this recipe inside WorqHat to get the outlined triggers, nodes, and delivery logic preconfigured. Update credentials, recipients, and copy, then ship it to production.

  • All workflow nodes referenced in this guide
  • Structured JSON outputs for dashboards and mailers
  • Inline documentation for faster handoffs

Get started checklist

  1. Duplicate the workflow template.
  2. Connect your datasource and credentials.
  3. Customize content and recipients.
Launch this template

Automated Resume Shortlisting (Batch Processor)

This workflow replaces manual resume screening with an AI-powered batch processor that:

  • Pulls pending applications on a schedule
  • Reads and converts resume files into text
  • Evaluates each candidate against a DevRel job description
  • Updates statuses to Shortlisted or Rejected and notifies the team on strong matches

Recruiters get consistent, fast evaluations instead of grinding through PDFs.

Previous State

New applications arrive → recruiter downloads resumes → skims each one → compares to job description → updates a spreadsheet or ATS → pings the team when someone looks promising.

The result: slow response times, inconsistent decisions, and a lot of time lost context-switching between tools.

Target State

  • A time-based trigger wakes the workflow every 10 minutes
  • It fetches a small batch of Pending candidates from the database
  • Each resume is extracted, scored by AI, and classified as DevRel-fit or not
  • Matching profiles are marked Shortlisted and pushed to Slack/Discord
  • Non-matches are marked Rejected to keep the pipeline clean

The HR team only reviews curated shortlists instead of raw resumes.

Workflow Breakdown

1. Time Based Runs (Cron Trigger)

  • Purpose: Start the batch automatically every 10 minutes.
  • Cron: */10 * * * *
  • Inputs: None – the workflow fetches its own work from the database.

This node is the heart of the batch cadence.
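As a quick sanity check of the schedule (plain Python, not WorqHat configuration): the `*/10` minute field means the trigger fires whenever the minute of the hour is divisible by 10, i.e., six runs per hour.

```python
# "*/10 * * * *" fires when the minute of the hour is divisible by 10.
fire_minutes = [m for m in range(60) if m % 10 == 0]
print(fire_minutes)  # [0, 10, 20, 30, 40, 50] -> six runs per hour
```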

2. Query Data (Fetch Pending Applications)

  • Collection: Applications
  • Filter: status == "Pending"
  • Limit: 5 (to keep batches small and avoid latency / throttling)
  • Output: Up to 5 candidate records, each containing at least name, status, and resume_url.

If no pending records exist, the workflow naturally becomes a no-op for that run.
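Conceptually, the node behaves like a filtered, limited read. A minimal in-memory sketch (the `fetch_pending` helper and sample records are illustrative, not the node's actual implementation):

```python
def fetch_pending(applications, limit=5):
    """Mimic the Query Data node: up to `limit` records with status == "Pending"."""
    return [a for a in applications if a["status"] == "Pending"][:limit]

apps = [
    {"_id": 1, "name": "Asha", "status": "Pending", "resume_url": "https://example.com/asha.pdf"},
    {"_id": 2, "name": "Ben", "status": "Shortlisted", "resume_url": "https://example.com/ben.pdf"},
]
batch = fetch_pending(apps)
print(len(batch))  # 1 -> only Asha is still Pending
```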

3. Loop Process (Process Each Candidate)

  • Loop variable: {{nodes.query_data[]}} (array of pending applications).
  • Purpose: Run the same inner steps (extract → evaluate → update → notify) once per candidate within the batch.

Downstream nodes reference the loop variable (e.g., {{loop.currentItem.resume_url}}) for candidate-specific data like the resume URL, name, and record ID.

4. Text Extraction (Read Resume File)

  • Input: {{loop.currentItem.resume_url}} (PDF/DOC URL)
  • Purpose: Convert the resume file into clean, machine-readable text.
  • Output: content field containing the raw resume text.

This normalizes different file formats into a single text stream for AI.
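The extraction engine itself is built into the node; the important property is that every format ends up as one clean text stream. A sketch of the kind of normalization involved (`normalize_resume_text` is a hypothetical helper, not the node's code):

```python
import re

def normalize_resume_text(raw: str) -> str:
    # Collapse runs of whitespace (tabs, form feeds, repeated newlines)
    # into single spaces so the AI sees one continuous text stream.
    return re.sub(r"\s+", " ", raw).strip()

print(normalize_resume_text("John  Doe\n\nDevRel\tEngineer"))  # "John Doe DevRel Engineer"
```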

5. Text Generation (AI Resume Evaluation)

  • Model: aicon-v4-large
  • Response type: JSON
  • Prompt (simplified):
    • Provide the DevRel job description (community + content + coding, not pure backend).
    • Provide the resume text from the extraction node.
    • Ask the model to decide if the candidate is a DevRel fit and explain why.
  • Expected JSON output:
{ "is_match": true, "reason": "Short summary of why" }
  • Role: This node is the core classifier: it turns unstructured resume text into a structured hiring decision.
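Because the node returns JSON, anything consuming its output should validate the shape before branching. A defensive parsing sketch (the `is_match` and `reason` fields come from the expected output above; treating malformed output as a non-match is an assumption, not workflow behavior):

```python
import json

def parse_evaluation(raw: str) -> dict:
    """Parse the model's JSON reply; treat anything malformed as a non-match."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {"is_match": False, "reason": "Unparseable model output"}
    return {
        "is_match": bool(data.get("is_match", False)),
        "reason": str(data.get("reason", "No reason given")),
    }

print(parse_evaluation('{"is_match": true, "reason": "Strong community work"}'))
```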

6. If-Else Condition (Match vs. No Match)

  • Condition: {{nodes.text_generation.output.is_match}} == true
  • Branches:
    • IF true: Candidate is considered a DevRel match.
    • ELSE: Candidate does not fit the DevRel role.

This single decision point drives the rest of the pipeline.

7A. IF Match → Shortlist + Notify

Update Data (Shortlist)

  • Collection: Applications
  • Selector: _id == {{loop.currentItem._id}}
  • Update fields:
    • status = "Shortlisted"
    • ai_notes = {{nodes.text_generation.output.reason}}

Send Discord / Slack Message (Alert)

  • Channel: Recruiting / hiring channel (via webhook).
  • Message example:
🎯 Shortlisted: {{loop.currentItem.name}} — {{nodes.text_generation.output.reason}}

This ensures strong candidates surface to the team immediately.
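As a sketch, a Discord-style webhook call can be assembled with the standard library alone (the webhook URL is a placeholder; Slack webhooks take a `text` field in the same shape):

```python
import json
import urllib.request

def notify_shortlist(webhook_url: str, name: str, reason: str) -> urllib.request.Request:
    """Build a Discord webhook request announcing a shortlisted candidate."""
    payload = {"content": f"\U0001F3AF Shortlisted: {name} — {reason}"}
    return urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = notify_shortlist(
    "https://discord.com/api/webhooks/REPLACE_ME", "Asha", "Strong community work"
)
# urllib.request.urlopen(req)  # actually send it; omitted to keep the sketch offline
print(req.get_method())  # POST
```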

7B. ELSE No Match → Reject

Update Data (Reject)

  • Collection: Applications
  • Selector: _id == {{loop.currentItem._id}}
  • Update fields:
    • status = "Rejected"

Purpose: keep the pipeline clean by explicitly closing out mismatched profiles.

8. Return State (Batch Complete)

  • Message: "Batch processing complete."
  • Optional payload: Number of records processed, number shortlisted, number rejected.

This node makes the workflow easy to monitor from logs or higher-level orchestrators.
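A sketch of the kind of summary payload this node could return (the field names are illustrative):

```python
def batch_summary(results):
    """Summarize one batch: `results` is the list of final statuses per candidate."""
    return {
        "message": "Batch processing complete.",
        "processed": len(results),
        "shortlisted": results.count("Shortlisted"),
        "rejected": results.count("Rejected"),
    }

print(batch_summary(["Shortlisted", "Rejected", "Rejected"]))
# {'message': 'Batch processing complete.', 'processed': 3, 'shortlisted': 1, 'rejected': 2}
```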

Conditional Logic & Branching

There is a single key decision:

  • IF is_match == true → Shortlist + notify
  • ELSE → Reject

No nested branching is required. All complexity remains in the AI evaluation prompt and the downstream updates.

Data Flow & Transformations

End-to-end flow:

  1. Database → Query Data: Pulls up to 5 Pending applications.
  2. Query Data → For Loop: Iterates through each candidate as loop.currentItem.
  3. For Loop → Text Extraction: Downloads and extracts the resume file into plain text.
  4. Text Extraction → Text Generation: AI maps text to a DevRel-fit decision + justification JSON.
  5. Text Generation → If-Else: Routes candidates to Shortlist vs. Reject.
  6. If-Else → Update Data: Writes back Shortlisted / Rejected and AI notes.
  7. Update Data → Notifications: Sends alerts for shortlisted profiles.

Main transformation chain: File URL → Text → AI Classification → JSON Decision → Status update.
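The chain above can be sketched as one plain-Python loop; every helper passed in (`extract_text`, `evaluate`, `update_status`, `notify`) is a stand-in for the corresponding node, not a real WorqHat API:

```python
def process_batch(candidates, extract_text, evaluate, update_status, notify):
    """One batch run: extract -> evaluate -> update -> notify, per candidate."""
    for c in candidates:
        text = extract_text(c["resume_url"])   # Text Extraction node
        verdict = evaluate(text)               # Text Generation node (JSON decision)
        if verdict["is_match"]:                # If-Else node
            update_status(c["_id"], "Shortlisted", verdict["reason"])
            notify(c["name"], verdict["reason"])
        else:
            update_status(c["_id"], "Rejected", verdict["reason"])
    return "Batch processing complete."        # Return State node
```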

Integration Requirements

  • AI model: aicon-v4-large for resume understanding and DevRel fit classification.
  • File extraction engine: PDF/DOC ingestion for the Text Extraction node.
  • Slack/Discord: Webhook URL or bot token for real-time notifications.

Authentication:

  • AI API key (configured at the Text Generation node or globally).
  • Webhook secrets / tokens for Slack or Discord.

Rate limiting:

  • Batch size (limit = 5) keeps processing within reasonable time and under API quotas.
  • Adjust the cron interval or limit upward/downward based on your inbound volume.
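Two knobs control throughput: the batch limit and the cron interval. The arithmetic for the defaults:

```python
limit, interval_min = 5, 10          # defaults from this template
per_hour = limit * (60 // interval_min)
per_day = per_hour * 24
print(per_hour, per_day)  # 30 720 -> at most 30 resumes/hour, 720/day
```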

Outcomes

  • Dramatically reduced manual resume screening for HR and hiring managers
  • Consistent application of the DevRel job criteria across every candidate
  • Faster response times for strong applicants
  • Clear, auditable notes (ai_notes) explaining why someone was shortlisted or rejected

Possible Enhancements

  • Add a “Maybe” bucket for borderline candidates with a separate follow-up workflow.
  • Add an Email node to send automated acknowledgement or rejection messages.
  • Store an AI score (0–100) alongside is_match for more granular ranking.
  • Add an analytics dashboard view over statuses, match rates, and sources.

Next Steps

  • Configure the Applications collection with at least name, status, and resume_url.
  • Wire the Text Extraction and Text Generation nodes with your file store and AI credentials.
  • Plug in Slack/Discord webhooks for shortlist alerts.
  • Turn on the workflow and let it screen new applicants every 10 minutes.

👉 Install this template in WorqHat and let your AI batch-processor handle resume shortlisting.