Joseph Pulayan

Metro Manila, Philippines
AI Software Developer | Generative AI Developer

About

AI Software Developer specializing in Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), and agentic workflows. Experienced in designing and deploying AI-powered agents and chatbots that integrate with business systems to automate processes and enhance user interactions.

Skilled in OpenAI, Anthropic, Gemini, and LangChain, with a focus on prompt engineering, intelligent orchestration, and RAG-enhanced solutions. Passionate about advancing Generative AI applications that deliver measurable business value.

Graduate of the University of Santo Tomas with a BS in Computer Science, specializing in Machine Learning, Natural Language Processing, and Empathic Computing. Currently working at Journey Better Business Group Inc., developing enterprise-grade AI solutions.

Experience

  • AI Software Developer, Journey Better Business Group Inc. (2024)
  • Software Developer Intern, Calibr8 Systems Inc. (2024)
  • BS Computer Science, University of Santo Tomas (2024)

Tech Stack

LLMs & Generative AI

OpenAI, Anthropic, Gemini, LiteLLM, RAG, LangChain, LlamaIndex

AI/ML Frameworks

TensorFlow, Keras, NumPy, Pandas, OpenCV, MediaPipe

Programming & Development

Python, JavaScript, React, FastAPI, PostgreSQL, Docker

Beyond Coding

When I'm not building AI solutions, I focus on emerging technologies in machine learning and natural language processing, and on staying up to date with the latest developments in generative AI.

I enjoy exploring new AI research papers, experimenting with different LLM architectures, and contributing to open-source projects in the AI/ML community.

Projects

  • AI Chatbot Agents (private project): Advanced conversational AI systems combining Large Language Models and RAG for intelligent business automation
  • Personal AI Assistant (available on this site): Interactive AI chatbot embedded in this portfolio website; ask questions about my experience and skills
  • Procurement AI Suite (private project): Automated procurement proposal system that streamlines tender creation and submission processes for enterprise contracts
  • InvoxAI (private project): Enterprise-grade AI system for automated invoice processing, intelligent vendor validation, and real-time fraud detection

Recent Blogs

Fine-tuning vs Prompt Engineering: When to Use Each

October 2024 · 6 min read

As AI systems become more prevalent in enterprise applications, developers face a crucial decision: should you fine-tune a model or optimize your prompts? Both approaches have their place, but understanding when to use each can save time and resources while delivering better results.

Understanding the Fundamentals

Prompt Engineering involves crafting and optimizing the input text to guide a pre-trained model toward desired outputs. It's like learning to communicate effectively with an expert consultant.

Fine-tuning involves training a pre-trained model on your specific dataset to adapt its behavior for particular tasks. Think of it as providing specialized training to that consultant.

When to Use Prompt Engineering

  • Quick iterations: When you need to test ideas rapidly
  • Limited data: When you have fewer than 1,000 quality examples
  • General tasks: For common use cases like summarization, translation, or Q&A
  • Cost constraints: When computational resources are limited
  • Experimentation: During the early phases of AI implementation
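
To make this concrete, here is a minimal sketch of the prompt-engineering path using the OpenAI Python SDK. The model name, system prompt, and few-shot examples are illustrative placeholders rather than recommendations; the point is that all of the "training" lives inside the request itself.

```python
# Minimal prompt-engineering sketch (OpenAI Python SDK).
# The model name and the few-shot examples below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FEW_SHOT = [
    {"role": "system", "content": "Classify customer emails as 'billing', 'technical', or 'other'. Reply with the label only."},
    {"role": "user", "content": "I was charged twice for my subscription this month."},
    {"role": "assistant", "content": "billing"},
    {"role": "user", "content": "The dashboard throws a 500 error when I export reports."},
    {"role": "assistant", "content": "technical"},
]

def classify(email: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        temperature=0,        # keep the classification output stable
        messages=FEW_SHOT + [{"role": "user", "content": email}],
    )
    return response.choices[0].message.content.strip()

print(classify("Can I get a copy of my March invoice?"))
```

Iterating here means editing the system message or swapping the few-shot examples and re-running, which is exactly why this path suits rapid experimentation.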

When to Consider Fine-tuning

  • Domain-specific tasks: When working with specialized terminology or workflows
  • Consistent performance: When you need predictable, repeatable outputs
  • Large datasets: When you have 10,000+ high-quality training examples
  • Latency requirements: When response time is critical, since a fine-tuned model can replace long few-shot prompts and cut input tokens per request
  • Privacy concerns: When data cannot leave your infrastructure
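
For contrast, here is a hedged sketch of the fine-tuning path using OpenAI's fine-tuning API: the same kind of examples move out of the prompt and into a JSONL training file. The file name, the single example record, and the model snapshot are assumptions chosen for illustration.

```python
# Sketch of the fine-tuning path (OpenAI fine-tuning API).
# The training file, example record, and model snapshot are illustrative.
import json
from openai import OpenAI

client = OpenAI()

# 1. Each line of the JSONL training file is one chat-formatted example.
example = {
    "messages": [
        {"role": "system", "content": "Classify customer emails as 'billing', 'technical', or 'other'."},
        {"role": "user", "content": "I was charged twice for my subscription this month."},
        {"role": "assistant", "content": "billing"},
    ]
}
with open("train.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(example) + "\n")  # in practice: thousands of such lines

# 2. Upload the dataset and start a fine-tuning job.
upload = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=upload.id,
    model="gpt-4o-mini-2024-07-18",  # illustrative fine-tunable snapshot
)
print(job.id, job.status)  # poll the job, then call the resulting fine-tuned model
```

The payoff is that requests to the fine-tuned model no longer need long few-shot prompts, which is where the consistency and latency benefits listed above come from.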

Hybrid Approaches: The Best of Both Worlds

In practice, many successful AI implementations use both techniques:

  1. Start with prompt engineering to validate your use case and gather data
  2. Collect user interactions and feedback to build a training dataset (see the logging sketch after this list)
  3. Fine-tune when you have sufficient data and proven ROI
  4. Continue prompt optimization even after fine-tuning for edge cases
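
For step 2, one lightweight pattern is to log production prompt/response pairs in the same chat-formatted JSONL that fine-tuning expects, so the training dataset accumulates as a side effect of normal usage. This is a sketch under that assumption; the file path, metadata fields, and example values are illustrative.

```python
# Sketch: accumulate production interactions as fine-tuning-ready JSONL.
# The file path and metadata fields are illustrative choices, not a standard.
import json
from datetime import datetime, timezone

LOG_PATH = "interactions.jsonl"

def log_interaction(system_prompt: str, user_message: str, assistant_reply: str,
                    user_rating: int | None = None) -> None:
    record = {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
            {"role": "assistant", "content": assistant_reply},
        ],
        # Metadata makes it easy to keep only high-quality examples later
        # (e.g. rating >= 4); strip this key before uploading for fine-tuning.
        "meta": {"rating": user_rating, "ts": datetime.now(timezone.utc).isoformat()},
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

log_interaction("You answer procurement questions.",
                "What is our purchase order approval limit?",
                "Purchase orders above $10,000 require director approval.",
                user_rating=5)
```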

Making the Decision

Consider these factors when choosing your approach:

Budget: Prompt engineering has lower upfront costs but may carry higher inference costs, since long few-shot prompts add tokens to every request. Fine-tuning requires an initial investment but can reduce long-term costs.
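
A back-of-envelope model makes the budget trade-off concrete. Every number below is a hypothetical placeholder rather than real vendor pricing; plug in your own request volume, token counts, and per-token rates.

```python
# Back-of-envelope cost comparison. All prices, token counts, and volumes are
# hypothetical placeholders; substitute your provider's actual pricing.

def prompting_cost(requests, prompt_tokens, output_tokens, in_rate, out_rate):
    """Monthly cost of a base model driven by a long few-shot prompt (rates per 1K tokens)."""
    return requests * (prompt_tokens * in_rate + output_tokens * out_rate) / 1000

def finetuned_cost(requests, prompt_tokens, output_tokens, in_rate, out_rate,
                   training_cost, amortize_months):
    """Monthly cost of a fine-tuned model: shorter prompts, amortized training cost."""
    inference = requests * (prompt_tokens * in_rate + output_tokens * out_rate) / 1000
    return inference + training_cost / amortize_months

# Hypothetical scenario: 50k requests/month; a 4,000-token few-shot prompt on the
# base model vs a 200-token prompt on a fine-tuned model with higher per-token rates.
base = prompting_cost(50_000, 4_000, 300, in_rate=0.0005, out_rate=0.0015)
tuned = finetuned_cost(50_000, 200, 300, in_rate=0.0006, out_rate=0.0024,
                       training_cost=500, amortize_months=12)
print(f"few-shot prompting: ${base:,.2f}/month vs fine-tuned: ${tuned:,.2f}/month")
```

In this made-up scenario the fine-tuned model wins once the long prompt is eliminated, but changing any of the inputs can flip the conclusion, which is exactly why the calculation is worth running on your own numbers.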

Timeline: Prompt engineering can be implemented in days, while fine-tuning typically takes weeks to months.

Team expertise: Prompt engineering requires strong communication skills and domain knowledge. Fine-tuning requires ML engineering capabilities.

Conclusion

Both fine-tuning and prompt engineering are valuable tools in the AI developer's toolkit. Start with prompt engineering for rapid prototyping and validation, then graduate to fine-tuning when you have the data, resources, and proven business case. Remember, the best solution often combines both approaches strategically.

The key is to remain flexible and data-driven in your approach, always measuring performance and ROI to guide your decisions.