
Blog

Practical articles on AI, DevOps, Cloud, Linux, and infrastructure engineering.

Tag: #llm
Deep Dive: Prompt Versioning and Regression Testing
May 24, 2024 · Kiril Urbonas

Prompt Versioning and Regression Testing. Practical guidance for reliable, scalable platform operations.
Deep Dive: LLM Gateway Design for Multi-Provider Inference
May 20, 2024 · Kiril Urbonas

LLM Gateway Design for Multi-Provider Inference. Practical guidance for reliable, scalable platform operations.
Practical Guide: AI Inference Cost Optimization
April 8, 2024 · Kiril Urbonas

AI Inference Cost Optimization. Practical guidance for reliable, scalable platform operations.
Practical Guide: RAG Retrieval Quality Evaluation
February 21, 2024 · Kiril Urbonas

RAG Retrieval Quality Evaluation. Practical guidance for reliable, scalable platform operations.
Practical Guide: Prompt Versioning and Regression Testing
February 17, 2024 · Kiril Urbonas

Prompt Versioning and Regression Testing. Practical guidance for reliable, scalable platform operations.
Practical Guide: LLM Gateway Design for Multi-Provider Inference
February 13, 2024 · Kiril Urbonas

LLM Gateway Design for Multi-Provider Inference. Practical guidance for reliable, scalable platform operations.
Fine-tuning Large Language Models: A Practical Guide
February 12, 2024 · Kiril Urbonas

Learn how to fine-tune LLMs like Llama 2, Mistral, and GPT models for your specific use case. Includes LoRA, QLoRA, and full fine-tuning techniques.
Orchestrating AI Agents on Kubernetes
January 15, 2024 · Kiril Urbonas

A deep dive into managing stateful LLM workloads, scaling inference endpoints, and optimizing GPU utilization in a cloud-native environment.
Fine-tuning Llama 3 on Consumer Hardware
January 1, 2024 · Kiril Urbonas

Optimization techniques like LoRA and 4-bit quantization to run state-of-the-art models locally.