no-js web accessible help

Started by kdmoyers, April 29, 2026, 02:12:23 PM


kdmoyers

The crushing weight of the LLM revolution/tragedy has come to my company too, and it's a shame that the LLMs seem to have trouble accessing the help info for WinBatch.

If you start at https://docs.winbatch.com/ you get the cool left sidebar interface, which seems to baffle the LLMs. I don't think they do the cool interface thing.

If you start at https://docs.winbatch.com/Contents.htm you can't actually penetrate to the content; there just don't seem to be any links inward.

I've seen https://docs.winbatch.com/mergedProjects/WindowsInterfaceLanguage/html/HTMLWIL_WIL001.htm
mentioned -- is this the best no-js entry point to recommend to the llms?

This icky future is here, and I hate to see us drift away from WinBatch simply because the LLMs don't know it. It doesn't matter to me; I already know it, but the junior programmers are lazy...

thanks in advance,
Kirby
The mind is everything; What you think, you become.

td

Are you sure that JavaScript is the problem? It could just be the delay before the context sidebar is populated.

The big boys scan this website and the tech support website constantly, and they seem to grasp the WinBatch basics. I have been having some fun developing my own machine learning system, but I haven't tackled training the model with WinBatch scripts and documentation yet. Probability and information theory are enough to try to wrap my head around for now...
"No one who sees a peregrine falcon fly can ever forget the beauty and thrill of that flight."
  - Dr. Tom Cade

bottomleypotts

LLMs aren't coding because they are reading the manual. They're coding because they have examples. Until there is a tonne of well-coded and well-commented example WinBatch scripts, LLMs won't be able to help us.

td

Generally, training machine learning models involves both ingesting documentation and examples. It can be very tedious because coding examples should have an associated prompt. At the very least, the examples should contain quality code comments.
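Concretely, a coding example with an associated prompt usually ends up as one record in a JSONL training file. A minimal sketch, assuming a generic prompt/completion layout (the field names are illustrative, not any particular vendor's schema, and the WinBatch snippet is just a plausible commented example):

```python
import json

# Hypothetical instruction-tuning record pairing a prompt with a short,
# commented WinBatch answer. Field names are illustrative only.
record = {
    "prompt": "Write a WinBatch script that displays a greeting in a message box.",
    "completion": (
        "; Show a simple message box\n"
        'Message("Greeting", "Hello from WinBatch!")'
    ),
}

# One JSON object per line is the usual JSONL layout for training sets.
line = json.dumps(record)
```

Building thousands of these pairs by hand is exactly the tedium being described.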
"No one who sees a peregrine falcon fly can ever forget the beauty and thrill of that flight."
  - Dr. Tom Cade

bottomleypotts

That's true of the instruction-tuning / supervised fine-tuning stage.

However, that paired data is not where most of an LLM's coding ability comes from. That ability is built during pre-training, the massive self-supervised phase where the model simply predicts the next token across trillions of tokens of raw, internet-sourced data from GitHub repos, Stack Overflow threads, notebooks, etc.
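The "predict the next token" objective can be illustrated with a deliberately tiny stand-in: a bigram model that counts which token follows which. Real pre-training uses neural networks over trillions of tokens, but the objective is the same shape:

```python
from collections import Counter, defaultdict

# Toy illustration of next-token prediction: count which token follows
# which, then predict the most frequent follower. This is NOT how an
# LLM is implemented; it only demonstrates the training objective.
def train_bigram(tokens):
    follows = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        follows[cur][nxt] += 1
    return follows

def predict_next(follows, token):
    return follows[token].most_common(1)[0][0]

corpus = "the cat sat on the mat".split()
model = train_bigram(corpus)
```

A language with almost no corpus gives this kind of model almost nothing to count, which is the point being made about WinBatch below.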

td

Perhaps read my previous post a little more carefully. Also, most programming language documentation contains examples. That is true of WinBatch documentation. And models can be adjusted after they are created with sampling settings like top-k, top-p, and the Mirostat parameters tau, eta, and mu. These settings adjust the accuracy and performance. Fun math.

Anthropic's Claude AI chatbot will tell you that training for model updates comes from "publicly indexed places (forums, GitHub, official docs)."
"No one who sees a peregrine falcon fly can ever forget the beauty and thrill of that flight."
  - Dr. Tom Cade

bottomleypotts

I did read your post carefully.

The vast majority of an LLM's coding ability does not come from the handful of paired prompt+example snippets you see in official docs or during supervised fine-tuning. That stage is small.

What actually teaches the model how to code is the pre-training phase — next-token prediction on trillions of raw tokens scraped from the entire internet: GitHub repos, Stack Overflow, notebooks, forums, etc. That's where the model learns patterns, idioms, error handling, best practices, etc. for popular languages.

WinBatch is extremely obscure.

  • It has almost zero presence on GitHub (the "winbatch" topic barely exists and has virtually no stars or activity).
  • Stack Overflow has a handful of ancient questions.
  • The official WinBatch site and tech support database have maybe a couple thousand examples total.

That is a rounding error compared to the scale of pre-training data. A few dozen (or even a few thousand) examples in the docs are nowhere near enough for the model to actually learn the language the way it learned Python, JavaScript, or even PowerShell.

The sampling settings you mentioned (top-p, temperature, top-k, etc.) are inference knobs — they only change how the already-trained model picks the next token. They don't add new knowledge.
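To make the "inference knobs" point concrete, here is a toy sketch of how temperature, top-k, and top-p reshape an already-fixed output distribution when picking one token. This is an illustrative implementation of the standard definitions, not any particular engine's code:

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=0, top_p=1.0, rng=None):
    """Toy sketch of inference-time sampling knobs. They reshape the
    trained model's output distribution for the next-token pick; they
    cannot add knowledge the model never learned."""
    rng = rng if rng is not None else np.random.default_rng(0)
    logits = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(logits - logits.max())    # softmax, numerically stable
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]          # token ids, most likely first
    probs = probs[order]
    if top_k > 0:                            # keep only the k most likely tokens
        probs[top_k:] = 0.0
    if top_p < 1.0:                          # nucleus: smallest set reaching top_p mass
        keep = np.cumsum(probs) - probs < top_p
        probs[~keep] = 0.0
    probs /= probs.sum()                     # renormalize over survivors
    return int(order[rng.choice(len(probs), p=probs)])
```

Notice that every code path only filters or rescales `probs`; nothing here consults new data, which is why these knobs can't teach a model WinBatch.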

And yes, Claude is correct that training data comes from "publicly indexed places." The problem is WinBatch simply isn't there in any meaningful quantity. That's exactly why LLMs suck at it.

So no — the official docs with their limited examples are not going to magically give the model WinBatch superpowers. We'd need orders of magnitude more well-commented, real-world WinBatch code in the wild for that to happen.

spl

Very interesting discussion. But, you know me... I go on tangents. Training an LLM to write code (correctly) can be a holy grail, but I have become interested in using a local LLM to digest schema information from a database [a post with code I made a couple of months ago for making schemas]... and then act as a 'bot' that answers questions about the data in natural language. I am using "llama3.1" as the model, and the requests go to a local API - "http://localhost:11434/api/generate" - all done via HTTP request. The request could be made from WB via a CLR System.Net.HttpWebRequest, or possibly COM WinHttp. The issue is streaming the result content, so for now I'm using PowerShell and the ConvertFrom-Json cmdlet. But as both Tony and BP noted - a lot of behind-the-scenes work is required. Designing and training a database to respond to natural questions like "Who had the least leads from our ads last week and why do you think that is?" without an analyst crunching data in Excel... scary, but not difficult.
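The streaming headache here is mostly about the response format: Ollama's generate endpoint streams one JSON object per line, each carrying a "response" fragment, with "done": true on the last. A minimal Python sketch (endpoint and model name taken from the post; a WB/CLR or WinHttp version would do the same join over lines, and the `ask` helper is not invoked here since it needs a running server):

```python
import json
import urllib.request

# Assumed local Ollama endpoint, as described in the post.
OLLAMA_URL = "http://localhost:11434/api/generate"

def collect_stream(lines):
    """Join the 'response' fragments from Ollama-style NDJSON lines
    (one JSON object per line, with "done": true on the last one)."""
    parts = []
    for line in lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

def ask(prompt, model="llama3.1"):
    """Sketch only: requires a running local Ollama server."""
    body = json.dumps({"model": model, "prompt": prompt}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return collect_stream(resp)   # file-like responses iterate by line
```

Once the fragments are joined, there is no further JSON wrangling, which is the job ConvertFrom-Json is doing on the PowerShell side.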
Stan - formerly stanl [ex-Pundit]
