Is it still worth learning and using Hugging Face models and their ecosystem, or should I pivot to using LangChain for LLM APIs? I feel like the major AI companies are going to dominate the space soon.
This isn’t an either/or situation. You can use both depending on the use case.
Both offer real value; which one matters more depends entirely on your needs.
Why not use both?
Hugging Face gives you pre-trained models that you can fine-tune, run inference on, or export. LangChain is more about working with models you’ve already set up and creating workflows around them. So the question comes down to whether you want to host your own models or use remote ones via API.
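For the self-hosted route, here's a minimal sketch using the `transformers` library (the tiny `gpt2` checkpoint is just a placeholder, not a recommendation):

```python
from transformers import pipeline

# Download a checkpoint from the Hugging Face Hub and run inference locally.
# "gpt2" is only a small placeholder model; swap in whatever checkpoint fits your task.
generator = pipeline("text-generation", model="gpt2")

result = generator("LangChain and Hugging Face are", max_new_tokens=30)
print(result[0]["generated_text"])
```

Everything above runs on your own hardware. The API route skips the download and model management entirely, in exchange for per-call costs and sending your data off-box.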
Why not learn both? They serve different purposes, and both have value depending on your goals.
Hugging Face is great for fine-tuning models and more customized AI/ML work. LangChain is useful for building LLM-driven applications quickly but doesn’t offer the same depth of customization. If you need more control, Hugging Face is definitely still relevant.
@Chandler
Thank you!
Just a reminder: you don’t have to use LangChain for working with LLMs. You can absolutely build workflows without it!
You can fine-tune a pre-trained Hugging Face model to get your own LLM and plug it into LangChain if you need to. If privacy isn't a concern, enterprise LLM APIs are also a solid option.
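If you do go that route, there's an integration package for wrapping a locally hosted Hugging Face model as a LangChain LLM. A rough sketch, assuming the `langchain-huggingface` package and again using a small placeholder model:

```python
from langchain_huggingface import HuggingFacePipeline

# Wrap a Hub model as a LangChain LLM so it can be dropped into chains and workflows.
# The model id and generation settings here are placeholders, not recommendations.
llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 50},
)

print(llm.invoke("Explain the difference between fine-tuning and prompting:"))
```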
LangChain can be useful for prototyping, but in production it can be problematic because the documentation often lags behind its frequent API changes. I'd recommend prototyping with LangChain, then writing your own implementation after reviewing its source code.
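As a concrete example of what "writing your own implementation" can mean: a prompt chain often reduces to a short function that calls the provider's SDK directly. A minimal sketch with the `openai` package (the model name is just a placeholder):

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def summarize(text: str) -> str:
    # The whole "chain": build the messages, call the API, return the text.
    # No framework layer sitting between you and the request you're debugging.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you prototyped against
        messages=[
            {"role": "system", "content": "Summarize the user's text in one sentence."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(summarize("LangChain is handy for prototypes, but plain SDK calls are easy to own in production."))
```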