Medial • 20d
If you are still port forwarding in 2026, you are solving the wrong problem.

I just connected Ollama running on my Mac to a Hostinger VPS using Tailscale. Total time: about 90 seconds.

- Zero firewall rules.
- Zero exposed IPs.
- No router changes.

My local Llama 3 now answers API calls from a cloud n8n workflow, securely, over a private encrypted mesh. My ISP sees nothing unusual. My router remains untouched.

Most Indian founders think they must choose between:

- Local = private but isolated
- Cloud = accessible but exposed

That tradeoff is outdated. Tailscale assigns each machine a private 100.x IP that behaves like a LAN address, even if one device is in Mumbai and the other is on a VPS in Bangalore. Your Mac and VPS communicate as if they are on the same internal network.

The setup is simple.

On Mac:
- brew install tailscale
- sudo tailscale up
- tailscale ip -4
- Copy the 100.x IP.

On VPS:
- curl -fsSL https://tailscale.com/install.sh | sh
- sudo tailscale up

Now both machines are on the same private mesh.

Start Ollama on your Mac:
- OLLAMA_HOST=0.0.0.0 ollama serve

From the VPS, test the connection:
- curl http://100.x.x.x:11434/api/tags
- If you receive JSON with your models, the tunnel is live.

Inside n8n, set the Ollama base URL to: http://100.x.x.x:11434

That is it.
- No nginx.
- No SSL certificates.
- No static IP requests.
- No reverse proxy subscriptions.

For Indian SaaS founders, this changes the cost equation completely. Run Mixtral or Llama on your local machine at zero monthly compute cost. Let your VPS handle public webhooks and queues on a ₹399 plan. You get production automation backed by local GPU power, without paying enterprise cloud rates.

The competitive edge is not the model. It is the architecture. You keep:

• Local compute
• Cloud automation
• Zero exposed ports
• Encrypted private networking

The technical barrier is almost zero. The real question is not whether this works.
It is whether you are willing to keep your stack simple enough to win.

Subscribe on Youtube: https://www.youtube.com/@GeeksGrow?sub_confirmation=1
Follow on Instagram: https://www.instagram.com/varun_bhambhani/
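One easy mistake in the setup above is pasting the machine's regular LAN address into n8n instead of the Tailscale one. Tailscale hands out node addresses from the CGNAT range 100.64.0.0/10 (100.64.0.0 through 100.127.255.255), so a quick range check catches the mix-up. A minimal sketch; the `is_tailnet_ip` helper and the sample address are my own, not part of Tailscale:

```shell
#!/bin/sh
# is_tailnet_ip: succeeds (exit 0) if the IPv4 address falls inside
# Tailscale's CGNAT allocation 100.64.0.0/10, i.e. the first octet is
# 100 and the second octet is between 64 and 127 inclusive.
is_tailnet_ip() {
  ip="$1"
  first=$(printf '%s' "$ip" | cut -d. -f1)
  second=$(printf '%s' "$ip" | cut -d. -f2)
  [ "$first" = "100" ] && [ "$second" -ge 64 ] && [ "$second" -le 127 ]
}

# Example with a placeholder address; 192.168.x.x or 10.x.x.x here
# would mean you copied the LAN IP, not the tailnet IP.
if is_tailnet_ip "100.101.102.103"; then
  echo "looks like a tailnet IP"
else
  echo "not a tailnet IP"
fi
```

On a machine where Tailscale is already up, you could feed it the live address directly: `is_tailnet_ip "$(tailscale ip -4)" && echo ok`.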
Medial • 1y
Running Deepseek R1 1.5B on Ollama – AI at My Fingertips! Just set up and tested the Deepseek R1 1.5B model using Ollama, and I'm impressed with how seamless the experience is! This model is an efficient and capable LLM, and running it local
garoono.in minimal a... • 3m
Corporate be like: "Let's build a full iOS app in Flutter… without a real Mac… just use the VDI." And I'm sitting here like: Bro, Xcode on a VDI is basically lag, crashes, and delayed deadlines in 4K. A MacBook/Mac mini isn't a flex. It's literally
Medial • 19d
Your AI coding assistant should never phone home. I just cut the cord completely and it's 10x faster. Everyone's arguing about which cloud AI is better. They're missing the point: the cloud is the bottleneck. I ran Claude Code locally this morning u
Owner of PREM BARUA... • 1y
I am excited to present our company, Prem Barua Private Limited, a pioneering construction hardware manufacturing startup in Tripura. With a focus on producing black wire, aluminum, binding wires, and iron nails, we aim to address the region's relian
Founder Snippetz Lab... • 7m
I have been building apps with the help of AI agents, but there's always been one major flaw: No real security. No tamper protection. No local encryption. No defense against rooted or compromised devices. So, we built Novo – a fully offline, ultra-s