Problem
LinkedIn offers a valuable feature that lets users create and download a resume directly from their profile, effectively removing the challenges of resume formatting. However, students, being inexperienced, often struggle to craft high-quality resumes. As one of the career mentors, I review over 200 student resumes in an iterative process. Unfortunately, given the sheer volume, my colleagues and I can easily overlook quality issues, and students end up indiscriminately sending subpar or error-ridden resumes to potential employers.
This practice has resulted in a decreased employment rate and has negatively impacted the reputation of our course.
Furthermore, career mentors need to review each resume, analyse the student’s profile, provide feedback to the student, and refer them to suitable types of job roles.
Solution
We ask students to upload their LinkedIn resume PDFs to our Learning Management System (LMS), Moodle, as part of their assignment, and we review these resumes using Azure OpenAI ChatGPT-4o.
In this post, I won’t delve into the specifics of data preprocessing, but here are the key steps involved:
- Unzip the submitted resumes.
- Rename the folder to the respective student’s name, ensuring there are no duplicates.
- Transform each page of the LinkedIn PDF resume into a PNG format.
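Although the preprocessing details are out of scope for this post, here is a minimal sketch of the last step, assuming the pdf2image package (which requires poppler) and a hypothetical per-student folder layout; the actual notebook may implement this differently.

```python
# Minimal sketch only: pdf2image and the folder layout are assumptions.
import os
from pdf2image import convert_from_path

def pdf_to_pngs(pdf_path, output_folder, dpi=200):
    os.makedirs(output_folder, exist_ok=True)
    # Render each PDF page and save it as a numbered PNG file
    pages = convert_from_path(pdf_path, dpi=dpi)
    for index, page in enumerate(pages, start=1):
        page.save(os.path.join(output_folder, f"page_{index}.png"), "PNG")

# Example: pdf_to_pngs("data/Chan Tai Man/resume.pdf", "data/Chan Tai Man")
```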
AI Resume Reviewer
The system prompt below casts the model as an AI career mentor and defines the role of the AI Resume Reviewer (an excerpt is shown).
As a dedicated career guide, your responsibility is to meticulously examine student resumes and provide feedback in Markdown format. Here are the detailed instructions:
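The code later in this post refers to this prompt through a system_prompt variable. The full instruction list is not reproduced here, so the assignment below is only a trimmed sketch.

```python
# Trimmed sketch: the detailed instruction list from the original prompt is omitted.
system_prompt = (
    "As a dedicated career guide, your responsibility is to meticulously examine "
    "student resumes and provide feedback in Markdown format. "
    "Here are the detailed instructions: ..."
)
```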
Group resume images for each student
```python
import os
from collections import defaultdict

# Define the path to the "data" folder
data_folder = "data"

cv_images = []

# Traverse through each subfolder inside the "data" folder
for root, dirs, files in os.walk(data_folder):
    # Iterate over each file in the current subfolder
    for file in files:
        # Collect the path of every PNG resume page
        if file.endswith(".png"):
            cv_images.append(os.path.join(root, file))

# Group cv_images by folder (one folder per student)
cv_images_by_folder = defaultdict(list)
for image_path in cv_images:
    folder = os.path.dirname(image_path)
    cv_images_by_folder[folder].append(image_path)
```
Prepare the chat prompts
```python
import base64

# Function to encode an image file as a base64 string
def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")

# Function to create the chat messages for the AI model
def create_messages(base64_images):
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": [
            {"type": "text", "text": "Describe the images as an alternative text, provide feedback, warnings if any, and a rating on the resume."},
            *[
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{img}"}}
                for img in base64_images
            ],
        ]},
    ]
```
Run the AI review and save the result for each student
```python
from tqdm import tqdm
import os
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    openai_api_version=os.getenv("AZURE_OPENAI_GPT4O_API_VERSION"),
    azure_deployment=os.getenv("AZURE_OPENAI_GPT4O_DEPLOYMENT_NAME"),
    temperature=0,
)

# Sort the cv_images_by_folder dictionary by folder name
sorted_cv_images_by_folder = dict(sorted(cv_images_by_folder.items(), key=lambda x: x[0]))

for folder, images in tqdm(sorted_cv_images_by_folder.items(), desc="Processing folders"):
    save_path = os.path.join(folder, 'chatgpt_result.md')
    # Skip folders that already have a saved review
    if os.path.exists(save_path):
        continue
    encoded_images = [encode_image(image) for image in images]
    messages = create_messages(encoded_images)
    ai_message = llm.invoke(messages)
    # Save ai_message.content to a file
    with open(save_path, 'w') as file:
        file.write(ai_message.content)
```
Masked sample results 1
### Alternative Text Description

The image is a resume for XXXXXXXX. The resume is divided into three main sections: Contact, Experience, and Education.

**Contact Section:**
- Address: <deleted>
- Mobile: <deleted>
- Email: <deleted>
- LinkedIn: www.linkedin.com/in/<deleted>

**Experience Section:**
- DFI Retail Group
  - Position: Casual Sales Assistant
  - Duration: August 2023 - Present (11 months)
  - Location: Hong Kong, Hong Kong SAR

**Education Section:**
- Hong Kong Institute of Information Technology (HKIIT) at IVE (Lee Wai Lee)
  - Course: Higher Diploma in Cloud and Data Centre Administration, Cloud Computing
  - Duration: 2023 - 2025

### Feedback

#### Contact Details
- **Email:** Present
- **Phone Number:** Present
- **LinkedIn Profile URL:** Present

#### URLs
- www.linkedin.com/in/<deleted>

#### Technologies Mentioned
- None

#### Skills Highlighted
- None

#### Certifications Acquired
- None

#### Educational Qualifications
- Higher Diploma in Cloud and Data Centre Administration, Cloud Computing (2023 - 2025)

#### Professional Experiences
- Casual Sales Assistant at DFI Retail Group (August 2023 - Present, 11 months)

### Alerts and Warnings
1. **Missing Technologies and Skills:** The resume does not mention any specific technologies or skills.
2. **Missing Certifications:** No certifications are listed.
3. **Profile Section:** The profile section is missing the GitHub URL.
4. **Course Information:** The course name and institution are correctly mentioned.
5. **Spelling and Grammar:** No spelling mistakes detected.
6. **Summary Section:** The summary section is devoid of pronouns.
7. **Tense Consistency:** The tenses are consistent throughout the resume.

### Suggested Job Title
- Entry-Level Cloud and Data Centre Technician

### Resume Rating
**Rating: 4/10**

### Justification
The resume contains the essential contact details and educational qualifications, which are crucial for any job application. However, it lacks specific technologies, skills, and certifications that are vital for a career in cloud and data centre administration. The absence of a GitHub URL in the profile section is also a significant omission. These missing elements reduce the likelihood of securing a job in the desired field.

### Suggested Job
Based on the current content of the resume, the student is likely to land an entry-level position such as a Cloud and Data Centre Technician. To improve the chances of securing a job, it is recommended to include relevant technologies, skills, and certifications.
Masked sample results 2
### Alternative Text Description

The image is a resume for Kelvin Yiu, an XYZ Cloud Club Captain from New Territories, Hong Kong SAR. The resume is divided into several sections: Contact, Top Skills, Languages, Certifications, Summary, Experience, and Education.

#### Contact
- **Mobile:** <deleted>
- **Email:** <deleted>
- **LinkedIn:** www.linkedin.com/in/<deleted>
- **GitHub:** github.com/<deleted>

#### Top Skills
- ______________ Services (XYZ)
- Terraform
- Kubernetes

#### Languages
- Cantonese (Native or Bilingual)
- Mandarin (Professional Working)
- English (Professional Working)

#### Certifications
- XYZ Certified Solutions Architect – Associate
- XYZ Academy Graduate
- XYZ Academy Cloud Foundations

#### Summary
A tech enthusiast with at least 3 years of hands-on experience in developing with Python and Golang, working on several cloud projects. Has a cybersecurity background and led a team to participate in numerous public cybersecurity competitions in Hong Kong during high school studies.

#### Experience
**Amazon Web Services (XYZ)**
- **Role:** Cloud Captain
- **Duration:** March 2024 - Present (3 months)
- **Location:** Hong Kong SAR
- **Responsibilities:**
  - Started the first XYZ Cloud Club in Hong Kong.
  - Planned events to teach about clouds and prepare people for jobs in cloud technology.
  - Helped students join XYZ Cloud Clubs to build a cloud community.
  - Led the growth of the Hong Kong Regional Cloud Club.

#### Education
**Hong Kong Institute of Information Technology (HKIIT) at IVE (Lee Wai Lee)**
- **Course:** Higher Diploma in Cloud and Data Centre Administration, Cloud Computing
- **Duration:** September 2023 - September 2025

### Feedback and Warnings
1. **Contact Details:**
   - **Email:** <deleted>
   - **Mobile Number:** <deleted>
   - **LinkedIn Profile URL:** www.linkedin.com/in/<deleted>
2. **URLs Present:**
   - www.linkedin.com/in/<deleted>
   - github.com/<deleted>
3. **Technologies Mentioned:**
   - Amazon Web Services (XYZ)
   - Terraform
   - Kubernetes
   - Python
   - Golang
4. **Skills Highlighted:**
   - Amazon Web Services (XYZ)
   - Terraform
   - Kubernetes
5. **Certifications Acquired:**
   - XYZ Certified Solutions Architect – Associate
   - XYZ Academy Graduate
   - XYZ Academy Cloud Foundations
6. **Educational Qualifications:**
   - Higher Diploma in Cloud and Data Centre Administration, Cloud Computing (September 2023 - September 2025)
7. **Professional Experiences:**
   - XXXXXXXX Services (XYZ), Cloud Captain (March 2024 - Present, 3 months)

### Alerts
1. **Profile Section:**
   - Missing GitHub URL in the profile section.
2. **Summary Section:**
   - Contains the pronoun "I" which should be avoided.
   - Spelling mistake: "I have" should be "I have".
3. **Course Information:**
   - Correct course information is present.

### Resume Rating
**Rating: 4/10**

### Justification
The resume contains essential contact details, educational qualifications, and professional experiences. However, it has several issues:
- The summary section contains a pronoun and a spelling mistake.
- The GitHub URL is missing from the profile section.
- The professional experience is relatively short (3 months).

These issues reduce the overall quality and effectiveness of the resume, making it less likely to secure a job.

### Suggested Job Title
- Cloud Engineer
- Data Centre Technician

### Likely Job
Based on the resume content, the student is likely to land a job as a Cloud Engineer or Data Centre Technician.
AI Resume Extractor
The AI Resume Extractor retrieves all the review outcomes and exports them to a Microsoft Excel file. It uses function calling to ensure the data is returned in the correct format and mapped to a structured record.
Get all AI reviews
```python
import os

# Define the path to the "data" folder
data_folder = "data"

chatgpt_results = []

# Traverse through each subfolder inside the "data" folder
for root, dirs, files in os.walk(data_folder):
    # Iterate over each file in the current subfolder
    for file in files:
        # Collect the path of every saved review
        if file == "chatgpt_result.md":
            chatgpt_results.append(os.path.join(root, file))

chatgpt_results.sort()
```
Set up function calling with LangChain and Pydantic.
```python
from langchain_core.utils.function_calling import convert_to_openai_function
from typing import List, Optional
from langchain.pydantic_v1 import BaseModel, Field

class StudentCvRecord(BaseModel):
    """Call this to save a student CV record in markdown format."""
    name: str = Field(description="Name of the student")
    email: Optional[str] = Field(description="Email address")
    mobile_number: Optional[str] = Field(description="Contact number")
    linkedin_profile_url: str = Field(description="LinkedIn profile url")
    resume_rating: int = Field(
        description="Rating of the resume between 1 to 10")
    rationale: str = Field(description="Rationale for the rating")
    warning: str = Field(description="Any warning message")
    feedback: str = Field(description="Feedback message")
    proposed_job_titles: List[str] = Field(description="Proposed job titles")
    certifications: List[str] = Field(description="List of certifications")
    technologies: List[str] = Field(description="List of technologies")
    skills: List[str] = Field(description="List of skills")
    work_experience: List[str] = Field(description="List of work experiences")

student_cv_record_function = convert_to_openai_function(StudentCvRecord)
```
Extract the result for each student, falling back to GPT-4o when GPT-3.5 Turbo cannot produce a valid structured (JSON) record.
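The extraction loop below relies on two chains, chain35 and chain4o, whose definitions are not shown in this post. The sketch below is one plausible way to build them from the function definition above; the prompt wording and the GPT-3.5 environment variable names are assumptions.

```python
# Sketch only: this chain construction is assumed, not taken from the original notebook.
import os
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers.openai_functions import PydanticOutputFunctionsParser
from langchain_openai import AzureChatOpenAI

extraction_prompt = ChatPromptTemplate.from_messages([
    ("system", "Extract the student CV record from the resume review below."),
    ("human", "{cv}"),
])
parser = PydanticOutputFunctionsParser(pydantic_schema=StudentCvRecord)

def build_extraction_chain(api_version_env, deployment_env):
    llm = AzureChatOpenAI(
        openai_api_version=os.getenv(api_version_env),
        azure_deployment=os.getenv(deployment_env),
        temperature=0,
    )
    # Force the model to call the StudentCvRecord function so the reply is structured
    bound_llm = llm.bind(
        functions=[student_cv_record_function],
        function_call={"name": "StudentCvRecord"},
    )
    return extraction_prompt | bound_llm | parser

chain35 = build_extraction_chain("AZURE_OPENAI_GPT35_API_VERSION",
                                 "AZURE_OPENAI_GPT35_DEPLOYMENT_NAME")  # assumed variable names
chain4o = build_extraction_chain("AZURE_OPENAI_GPT4O_API_VERSION",
                                 "AZURE_OPENAI_GPT4O_DEPLOYMENT_NAME")
```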
```python
import json
from tqdm import tqdm

student_records = []
for result_path in tqdm(chatgpt_results):
    result_path_json = result_path.replace(".md", ".json")
    # Reuse the cached JSON record if this review has already been extracted
    if os.path.exists(result_path_json):
        with open(result_path_json, "r") as f:
            result_json = f.read()
        result = StudentCvRecord.parse_raw(result_json)
        student_records.append(result)
        continue
    with open(result_path, "r") as f:
        cv = f.read()
    name = result_path.split("/")[-2]
    try:
        result = chain35.invoke({"cv": cv})
    except Exception:
        # Fall back to GPT-4o when GPT-3.5 Turbo fails to return a valid record
        result = chain4o.invoke({"cv": cv})
    result.name = name
    result_json = json.dumps(result.dict())
    with open(result_path_json, "w") as f:
        f.write(result_json)
    student_records.append(result)
```
Microsoft Excel Report
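The export step itself is not shown in the post; here is a minimal sketch using pandas (with openpyxl installed), where the output file name is an assumption.

```python
# Sketch only: flatten each StudentCvRecord into a spreadsheet row.
import pandas as pd

rows = []
for record in student_records:
    row = record.dict()
    # Join list fields so each fits into a single Excel cell
    for key in ("proposed_job_titles", "certifications", "technologies",
                "skills", "work_experience"):
        row[key] = ", ".join(row[key])
    rows.append(row)

pd.DataFrame(rows).to_excel("student_cv_records.xlsx", index=False)
```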
Now, we can mail merge the results to students and let them fix their resumes.
How to use it?
- Fork https://github.com/wongcyrus/linkedin-resume-reviewer
- Create a GitHub Codespace
- Fill in .env_template and rename it to .env.
- Create a data folder and upload the zipped PDF resumes into it.
- Modify zip_file_path and run data-preprocessing.ipynb
- Run ai-resume-reviewer.ipynb to review the resume images with Azure OpenAI ChatGPT-4o.
- Run ai-resume-extractor.ipynb to extract the review results with Azure OpenAI ChatGPT-3.5 Turbo and ChatGPT-4o.
Conclusion
The integration of Azure OpenAI ChatGPT-4o into our resume review process has significantly improved the quality of student resumes. By automating the initial review and feedback process, we ensure that each resume is meticulously examined for errors, missing information, and overall quality. This approach not only saves time for career mentors but also enhances the employability of our students by providing them with high-quality resumes. As a result, we have observed an increase in employment rates and a positive impact on the reputation of our course. This innovative solution demonstrates the potential of AI in transforming educational and career support services.
Improving a LinkedIn-generated resume PDF also encourages students to maintain an impressive LinkedIn presence, and keeping a well-crafted LinkedIn profile is valuable throughout one’s career.
Project collaborators include Kelvin Yiu, Karl Chan, and Mandy Lau from the IT114115 Higher Diploma in Cloud and Data Centre Administration, who are also Microsoft Learn Student Ambassador candidates.
About the Author
Cyrus Wong is a senior lecturer at the Hong Kong Institute of Information Technology (HKIIT) at IVE (Lee Wai Lee), where he focuses on teaching public cloud technologies. He is one of the Microsoft Learn for Educators Ambassadors and a Microsoft Azure AI MVP from Hong Kong.