From “smart features” to supercomputer forecasts: AI in everyday life, AI in weather, and where ChatGPT fits
- Jan 6
- 4 min read
Most people use AI every day without calling it “AI.” In weather, it’s similar: some of the biggest changes are happening behind the scenes, helping forecasters handle more data faster—while the laws of physics (and uncertainty) still call the shots.
To give you the full picture, this guide starts with the AI categories you already use, moves to a short history of AI in weather forecasting, explains what ChatGPT actually uses when it helps with forecasts, and closes with why cognitive science was such a lively field in the late 1980s.

1) Categories of AI people use day-to-day (often without knowing)
Machine Learning (ML)
Post-processing/calibration: correcting model bias (temps, wind, rain probability)
Downscaling: making coarse model output more local
Statistical guidance: “chance of >50 mm”, “chance of damaging wind”, etc.
Think: learn from past models vs observed outcomes to improve today’s guidance.
Deep Learning (DL)
AI forecast models: DL systems that predict future weather fields directly (fast global guidance)
Nowcasting: short-term radar/satellite-based rain/storm prediction
Satellite/radar interpretation: DL vision models to detect storm structures, cloud types, etc.
Probabilistic AI: DL models producing ensembles/uncertainty (AI-style “ensemble” outputs)
DL is basically ML with large neural networks, and it’s where the big recent leap has happened.
NLP (Natural Language Processing)
Writing or summarising forecast discussions (human-reviewed)
Translating technical outputs into plain language
Extracting info from text reports (storm reports, incident logs)
So NLP helps people understand and act on forecasts, rather than creating the forecast physics.
2) A short history of AI in weather forecasting
Stage 1 — before “AI”: numerical weather prediction (NWP)
Early weather forecasting improvements came from better observations and better maths/physics.
A key milestone was the first computer-generated forecast in 1950, run on ENIAC, showing that computer-based numerical prediction was feasible.
Stage 2 — “AI before it was trendy”: statistical post-processing (1960s–1970s)
Before deep learning, forecasting centres used statistical methods to correct model biases and make outputs more locally useful.
MOS (Model Output Statistics) is a method that uses statistical relationships between model output and real observations to improve practical forecasts. Developed in the US starting in the late 1960s, MOS is still widely used and cited today.
This blend of physics models and statistics has long been standard in forecasting.
Stage 3 — today: deep learning forecast models and hybrid systems (2020s)
Since around 2022, deep learning weather models trained on reanalysis data have delivered fast, global guidance.
GraphCast (DeepMind) demonstrated strong medium-range skill and drew significant attention; the work was published in Science.
GenCast introduced a probabilistic ML approach (think “AI ensemble”), published in Nature.
And the biggest “this is real now” sign: agencies began running AI guidance operationally.
ECMWF: its AIFS (Artificial Intelligence Forecasting System) became operational on 25 Feb 2025, with the ensemble version operational on 1 Jul 2025, running alongside the traditional physics-based IFS.
NOAA announced operational deployment of AI-driven global models in December 2025 (AIGFS, AIGEFS, and a hybrid ensemble approach).
The plain-English takeaway
Modern forecasting is increasingly hybrid:
Physics-based models are essential for observational consistency and physical realism.
AI adds speed, additional guidance, and in some cases improved skill.
Forecasters use ensembles/probabilities to manage uncertainty.
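The ensemble idea above is simple to sketch: run (or collect) many forecasts, then report the fraction that cross a threshold as a probability. The member values below are invented purely to illustrate the arithmetic.

```python
# Hypothetical sketch: turning an ensemble of rainfall forecasts into a
# probability statement like "chance of >50 mm". Member values are invented.
members_mm = [12, 34, 55, 61, 48, 70, 22, 51, 40, 58]  # 10 ensemble members

def prob_exceed(members, threshold_mm):
    """Fraction of ensemble members exceeding the threshold."""
    return sum(m > threshold_mm for m in members) / len(members)

chance = prob_exceed(members_mm, 50)  # → 0.5 (5 of 10 members above 50 mm)
```

Real ensembles add calibration on top of this raw member count, but the probability a forecaster communicates ultimately rests on this kind of tally.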
3) What ChatGPT uses for weather forecast assistance and commentary
What it actually “uses”
1) Your inputs (most important)
ChatGPT is best when you provide:
model maps/screenshots and rainfall ranges
key synoptic notes (“trough deepening”, “moisture plume”, “wind shift line”)
the structure/tone you want (region-by-region, “alert not alarm”, Aussie spelling)
It then helps with:
summarising, comparing model runs, turning notes into clean commentary
drafting impact-based wording and safety framing
making consistent templates/checklists
2) General meteorology knowledge (helpful, but generic)
It can explain concepts (ensembles, convergence, “training” storms), but it is limited in that it does not “see” today’s atmosphere unless you supply the evidence.
3) Optional tools (only if enabled/used)
If browsing or data tools are available in your ChatGPT setup, it can:
look up official warnings and announcements
work with uploaded data files
run calculations or verification workflows (e.g., scoring model performance)
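One way such a verification workflow might score probability forecasts is the Brier score: the mean squared difference between forecast probabilities and what actually happened. This is a generic sketch with made-up numbers, not an official scoring pipeline.

```python
# Brier score: mean squared error between probability forecasts and
# 0/1 outcomes. Lower is better (0 is perfect). Data is illustrative only.
forecast_probs = [0.9, 0.2, 0.7, 0.1]   # forecast "chance of rain"
outcomes       = [1,   0,   1,   0]     # 1 = it rained, 0 = it didn't

def brier_score(probs, obs):
    """Average of (probability - outcome)^2 across forecast cases."""
    return sum((p - o) ** 2 for p, o in zip(probs, obs)) / len(probs)

score = brier_score(forecast_probs, outcomes)
```

A perfectly confident, perfectly correct forecaster scores 0; always saying 50% scores 0.25, which is a handy baseline to beat.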
How reliable is it?
ChatGPT can produce confident-sounding errors (“hallucinations”) and sometimes generate inaccurate or misleading information. OpenAI explicitly recommends verifying important information with reliable sources—especially when accuracy matters.
Safe rule: treat ChatGPT only as a forecast assistant (workflow + communication), not as the authoritative “forecast engine,” because it lacks the ability to independently analyse current weather data.
My interest in AI dates back to the late 1980s, when I was drawn to Cognitive Science.

The “cognitive revolution” (mid-1950s onward)
Cognitive science emerged from a significant shift in psychology and related fields: moving away from explaining behaviour purely as stimulus → response and toward studying what happens inside the mind—mental representations, memory, attention, language, and information processing. A well-known historical summary describes cognitive science as “a child of the 1950s.”
Becoming an organised field (mid-1970s onward)
By the mid-to-late 1970s, cognitive science had become more formally established as a discipline. Key milestones include the launch of the journal Cognitive Science (1977) and the creation of the Cognitive Science Society (late 1970s), which helped bring psychology, linguistics, AI/computer science, philosophy, and neuroscience into a shared research community.
Why it was so interesting in the late 1980s (the era I was writing about at Swinburne)
By the late 1980s, cognitive science was in a really active “debate + growth” phase:
Connectionism / neural networks rose sharply in influence. The big idea was that many mental abilities could emerge from networks of many simple processing units (rather than being purely symbolic, rule-based logic). The Parallel Distributed Processing (PDP) work associated with Rumelhart & McClelland was a major driver of this shift.
Cognitive neuroscience accelerated as brain measurement methods improved and became more widely used. This helped researchers link theories of thinking (the “mind level”) with evidence from brain activity (the “brain level”), thereby pulling the field toward more biological and experimentally testable models of cognition.





