Qwen2.5 14B Job Parsing Model

LLM · Fine-tuning · NLP

Fine-tuning and using the Qwen2.5 14B model for job description parsing tasks

Project Overview

The project fine-tunes the Qwen2.5 14B model to parse job descriptions into structured data. Training and inference run through the Unsloth framework for speed and memory efficiency, and fine-tuning uses LoRA (Low-Rank Adaptation), which updates only a small set of low-rank adapter weights instead of the full 14B parameters.
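
A minimal sketch of what the Unsloth + LoRA setup could look like; the checkpoint name, LoRA rank, and target modules below are illustrative assumptions rather than the project's actual configuration.

```python
from unsloth import FastLanguageModel

# Load a 4-bit quantized Qwen2.5 14B checkpoint (checkpoint name is an assumption).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-14B-Instruct-bnb-4bit",
    max_seq_length=4096,
    load_in_4bit=True,   # quantized weights keep the 14B model within a single-GPU budget
)

# Wrap the model with LoRA adapters: only the low-rank matrices below are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                # LoRA rank (illustrative)
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
    random_state=42,
)
```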

Key Features

  • Job description parsing into structured fields (see the sketch after this list)
  • Parameter-efficient fine-tuning with LoRA
  • Memory- and speed-optimized inference via Unsloth
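
To make the parsing task concrete, here is a hypothetical input/output pair; the field names and JSON schema are assumptions for illustration, not necessarily the schema used by the project.

```python
import json

# Hypothetical job ad and target schema; the project's real fields may differ.
job_ad = (
    "Senior Data Engineer at Acme Corp (remote). "
    "Requires Python, Spark, and 5+ years of experience."
)

prompt = (
    "Extract title, company, location, skills, and years_experience "
    "from the job description below and return them as JSON.\n\n"
    f"{job_ad}"
)

# The style of output the fine-tuned model is expected to produce (illustrative).
expected_output = {
    "title": "Senior Data Engineer",
    "company": "Acme Corp",
    "location": "Remote",
    "skills": ["Python", "Spark"],
    "years_experience": 5,
}
print(json.dumps(expected_output, indent=2))
```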

Technical Details

Fine-tuning uses LoRA: the base Qwen2.5 14B weights stay frozen while small low-rank adapter matrices are trained, which keeps GPU memory use and training time manageable for a 14B-parameter model. Unsloth provides the optimized training and inference paths on top of this setup.
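
A hedged sketch of the supervised fine-tuning step, following the common Unsloth pattern of training the LoRA-wrapped model with TRL's SFTTrainer; the dataset path and hyperparameters are placeholders, and argument names can shift between TRL versions.

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Placeholder dataset of formatted prompt/response texts (path is hypothetical).
dataset = load_dataset("json", data_files="job_parsing_train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,                        # LoRA-wrapped Qwen2.5 14B from the overview sketch
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",          # assumes each record holds one fully formatted example
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=2,  # illustrative single-GPU settings
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        num_train_epochs=1,
        logging_steps=10,
        optim="adamw_8bit",             # 8-bit optimizer to further reduce memory
        output_dir="outputs",
    ),
)
trainer.train()
```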

Challenges & Solutions

The main challenge was running inference with a 14B-parameter model on limited hardware resources; the project addresses this through Unsloth's memory-efficient, optimized inference support.
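
One common way to handle this with Unsloth is to keep the model quantized and switch it into Unsloth's inference mode before generating; a minimal sketch, assuming the fine-tuned `model` and `tokenizer` from the sketches above are already loaded.

```python
from unsloth import FastLanguageModel

# Switch Unsloth to its faster, lower-overhead inference path.
FastLanguageModel.for_inference(model)

job_ad = "..."  # a raw job description string
prompt = (
    "Extract title, company, location, skills, and years_experience "
    f"from the job description below and return them as JSON.\n\n{job_ad}"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,   # bound generation length to cap latency and memory
    do_sample=False,      # deterministic decoding suits structured extraction
)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```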

Project Details

Year: 2024

Technologies Used

Qwen2.5
Unsloth
LoRA
Python