Position: AI Engineer
End Client: DTE Energy
Location: Detroit, MI (hybrid, onsite 3 days a week; local candidates only)
Duration: 12+ months

PROJECT DESCRIPTION:
QA Automation project: The current QA process involves manually assessing 16 customer experience questions for each call, which is time-consuming and inconsistent. As part of our transformation initiative, we are building an LLM-based automation system to:
- Analyze call transcripts.
- Automatically score QA forms based on predefined criteria.
- Ensure consistency, scalability, and efficiency across all evaluations.
- Apply efficient prompt engineering and optimization techniques.
(An illustrative sketch of this scoring flow appears at the end of this posting.)

Key Responsibilities:
- Design and implement LLM-driven models to automate QA scoring across customer calls.
- Collaborate with product and QA teams to define scoring logic and evaluation metrics.
- Build scalable pipelines to process and analyze large volumes of call data.
- Design and implement LLM-based models to summarize call transcripts and generate structured CR notes.
- Develop and optimize prompts for accurate, context-aware summarization.
- Build and deploy scalable pipelines for real-time and batch processing of call data.
- Architect and develop a chatbot interface for end users to query and interact with summarized content.
- Bring strong, hands-on experience with LLM model deployment.

REQUIREMENTS:

Education & Background
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, Machine Learning, or a related field
- 3-5+ years of hands-on experience in machine learning, NLP, or AI-driven application development

Technical Skills
- Strong programming skills in Python (especially with libraries such as PyTorch, TensorFlow, LangChain, Hugging Face Transformers, or the OpenAI SDK)
- Experience with LLM fine-tuning, prompt engineering, and evaluation techniques
- Expertise in data processing pipelines (e.g., Pandas, Spark, or other ETL frameworks)
- Familiarity with vector databases (such as FAISS, Pinecone, Chroma, or Weaviate) for context retrieval
- Experience building and deploying LLM applications using frameworks such as LangChain, LlamaIndex, or RAG (Retrieval-Augmented Generation)
- Strong understanding of API integration and microservice architectures
- Knowledge of cloud platforms (AWS, Azure, or Google Cloud Platform) for scalable AI model deployment
- Experience with MLOps tools (Docker, Kubernetes, MLflow, Vertex AI, SageMaker, etc.)
- Familiarity with QA automation tools or contact center data is a plus

Soft Skills
- Strong problem-solving and analytical abilities
- Ability to collaborate effectively with product, QA, and data teams
- Excellent communication skills and a focus on scalability and production-readiness
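
Illustrative sketch (for context only, not part of the role description): one way a single QA criterion might be scored against a call transcript with an LLM, assuming the OpenAI Python SDK (openai>=1.x) with an OPENAI_API_KEY set in the environment. The model name, criterion text, and JSON schema below are placeholders, not the project's actual scoring logic.

    # Minimal sketch: scoring one QA question against a call transcript with an LLM.
    # Assumptions: OpenAI Python SDK (openai>=1.x), OPENAI_API_KEY in the environment;
    # model name, criterion wording, and output schema are illustrative placeholders.
    import json
    from openai import OpenAI

    client = OpenAI()

    QA_QUESTION = "Did the agent greet the customer and verify their account?"  # 1 of 16 criteria

    def score_transcript(transcript: str) -> dict:
        """Ask the model to score a single QA criterion and return structured JSON."""
        prompt = (
            "You are a QA analyst for a utility contact center.\n"
            f"Criterion: {QA_QUESTION}\n"
            "Score the call transcript below as PASS or FAIL and give a one-sentence rationale.\n"
            'Respond with JSON: {"score": "PASS" or "FAIL", "rationale": "..."}\n\n'
            f"Transcript:\n{transcript}"
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            response_format={"type": "json_object"},  # constrain output to valid JSON
            temperature=0,  # deterministic scoring for consistency across evaluations
        )
        return json.loads(response.choices[0].message.content)

    if __name__ == "__main__":
        sample = "Agent: Thank you for calling, may I have your account number? ..."
        print(score_transcript(sample))

In practice, a pipeline like the one described above would run a prompt of this shape for each of the 16 criteria (or batch them into one structured call), with JSON-constrained output and temperature 0 chosen here to keep scoring consistent and machine-parseable.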