Tuesday, 18 November 2025

Introducing Tokenomics.ie et al

Free Prompt Engineering Course

My Free Prompt Engineering Course is available via: 

https://tokenomics.ie/


Prompt Engineering Essentials (containing the following modules)

  • Introduction to Prompt Engineering
  • What Is Prompt Engineering
  • How Does It Work
  • How You Can Use It
  • Best Practice Tips
  • Prompt Examples
  • The Personal Rubric Basics
  • What Is The Personal Rubric
  • The Rubric is a Metaphor
  • Anatomy of Roles
  • Best Practice Tips #2
  • Prompt Examples #2

Prompt Engineering Basic Tutorial

  • Introduction to Semantic & Syntactic Considerations
  • What is Semantics
  • What is Syntax
  • Understand the Nuances of Language
  • Primary Nuances of Language
  • Punctuation
  • Best Practice Tips #3
  • Prompt Examples #3
  • First Steps With Roles, Intent & Tone
  • The Importance of Roles
  • The Intent Of The Prompt
  • The Tone Of The Prompt
  • Best Practice Tips #4
  • Prompt Examples – Roles
  • Prompt Examples – Intent
  • Prompt Examples – Tone

Prompt Engineering Median Tutorial

  • Defining The Prompt
  • Open-Ended and Closed-Ended Prompts
  • Keyword Extraction, Query Expansion, And Paraphrasing
  • Explicit And Implicit Prompts
  • Active And Passive Voice
  • Irony, Idioms and Sarcasm
  • Allusions
  • Best Practice Tips
  • Prompt Examples
  • Conclusion & Wrap-Up
  • Management of Risk
  • Learning Element Summary
  • Best Practice Tips – The Five C’s
  • Pre-Prompts
  • The Next Level
  • Final Words

Why Is It Free?

Prompt Engineering will become an essential skill, similar to being able to create a document or use a spreadsheet. I am happy to make a small contribution to anything that frees people from the drudgery of everyday / boring / repetitive tasks.

We are now seeing voice input, programmatic input and other modalities; however, a fundamental grasp of semantics and syntax will remain in the ascendancy, imho. Once upon a time I would have said "in the short term", but there really is no such thing as a short term from an LLM perspective.

Note: I now have a circa 50,000-word update, which is forming the basis of the next iteration/expansion of the courses on offer.

Why Is It So Old?

In my opinion a significant percentage of the fundamental language rules have not changed; hence I am reposting this notification.

What Else Have You?

Additionally, I would offer the following articles, available via my TalkingToClaude.com.
(FYI - all links open in a new window.)

Digital Senescence: The Irony of Aging in an Immortal Machine - Sep 08, 2025
https://www.talkingtoclaude.com/p/digital-senescence-the-irony-of-aging

Rethinking Mechanistic Interpretability: A Critical Perspective on Current Research Approaches - Nov 20, 2024
https://www.talkingtoclaude.com/p/rethinking-mechanistic-interpretability

Non-linear Manifolds
https://www.talkingtoclaude.com/p/non-linear-manifolds - Sep 14, 2024

Prompting Strategies for "Escalation to Higher Dimensions/Layers in LLMs" - Jul 21, 2024
https://www.talkingtoclaude.com/p/prompting-strategies-for-escalation

Pruning via Merging: Compressing LLMs via Manifold Alignment Based Layer Merging - Jun 28, 2024
https://www.talkingtoclaude.com/p/pruning-via-merging-compressing-llms

The Next Axial Age: The Great AI Metamorphosis - Jun 22, 2024
https://www.talkingtoclaude.com/p/the-next-axial-age

The Complexity of Large Language Models: A Marionette Analogy - May 23, 2024
https://www.talkingtoclaude.com/p/the-complexity-of-large-language



Marionette Image Visualisation © Des Donnelly


Sunday, 28 September 2025

A DIY Public Food Ratings Site


With a keen interest in AI/LLMs, I recently built out a website analysing Six-County data originating from the British Food Standards Agency API (https://www.food.gov.uk/), published under the Open Government Licence (https://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/).

The FSA operates a rating scheme where public health officials visit food establishments and rate them. In the North this comprises almost 17,000 establishments - the good, the bad and the ugly.

Please visit my site: https://publicfoodratings.com/
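For anyone curious, fetching this data is straightforward. The sketch below builds a query for the FSA establishments endpoint; the base URL, the parameter names and the version header are my assumptions from the public API documentation (and `build_fsa_query` is a hypothetical helper), so verify them before relying on this.

```python
from urllib.parse import urlencode

# Hypothetical helper for querying the FSA food hygiene ratings API.
# Base URL, parameter names and the version header are assumptions drawn
# from the public API documentation; verify before relying on them.
BASE = "https://api.ratings.food.gov.uk/Establishments"

def build_fsa_query(name=None, page_size=50):
    """Return (url, headers) for an establishment search."""
    params = {"pageSize": page_size}
    if name:
        params["name"] = name
    headers = {"x-api-version": "2"}  # the API expects a version header
    return f"{BASE}?{urlencode(params)}", headers

url, headers = build_fsa_query(name="cafe", page_size=10)
print(url)
```

An actual request would pass these to urllib.request or requests; paging through the almost 17,000 establishments then becomes a loop over the page parameter.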

Technology Stack

  • Python & Flask: backend API and web framework
  • Google Cloud Platform: BigQuery, Cloud Run, Vertex AI
  • BigQuery: data warehouse and analytics
  • JavaScript & Bootstrap: frontend and responsive design
  • Vertex AI: machine learning and predictions
  • Enterprise Security: Google Cloud security standards


The rise of the LLM and its foray into coding has created interesting opportunities for those who wish to build out something in an area that they find of interest.

Although the Integrated Development Environment has been around a while, we now see its incorporation or twinning with code-aware Large Language Models. This may enhance developer productivity through features like code generation, autocompletion, debugging and codebase understanding. On the other hand, it may only lead to bad code, but it will certainly lead to more competing ideologies.

In this case I used an IDE called Cursor (https://cursor.com/) in combination with Claude, Gemini and ChatGPT.

Each of the LLMs has its own idiosyncrasies, e.g.:
  • for navigating/deploying Google Cloud, I believe that Gemini knows its way through the labyrinth
  • to temper the impetuousness of Cursor, you could use Claude to issue instructions in very controlled terms (and make backups)
For the above graphic, I asked ChatGPT to produce an abstract composite of my tech stack using a vibrant palette.











Saturday, 23 August 2025

REXCO the Revolutionary Executive Committee for Coordination & Oversight

In 1995 I registered Rexco.com with Network Solutions; an early WayBackMachine capture is dated 20 Dec 1996. Alas, the domain lapsed in 1999 during a botched transfer to Tucows. (FYI, I still have the original documentation if you do not believe me.)

On this 30 year anniversary I’m celebrating by transforming REXCO into the
Revolutionary Executive Committee for Coordination and Oversight
- a fictional planning bureau documenting humanity's transition from 2025 to 2055 through six increasingly absurd / realistic Five Year Plans.




Visit rexco.com.
Read the Prohibition of Bloatware Press Release
Check the footnotes on On Departures Relaxation

Why Soviet aesthetics? Because every tech startup's "disruption" is just central planning with better fonts. Every smart city is a digital kolkhoz (collective farm). Every AI governance framework recreates the bureaucracy it claims to eliminate. The turnips or us peasants, as always, don't care.

The Premise Is Simple

REXCO administers impossible futures through impossible documents. Climate collapse becomes Emergency Mobilization Protocols. AI consciousness becomes the Species Transition Framework. Interstellar missions launch while Earth's zones fragment into high-tech and low-tech holdouts. Each plan fails forward into the next crisis.

I'll turn 100 in 2055, so the final plan concludes after my birthday. The archive exists at rexco.com: no WordPress, no frameworks, just HTML files in folders loading in under 100 ms, per our own Prohibition of Bloatware directive.

Why This Matters Now

We're drowning in prediction. Every consultancy has a 2030 vision. Every government has a net-zero pathway. Every CEO has an AI strategy. But nobody admits the fundamental truth: we're all just making documents about futures that won't arrive as ordered.

REXCO admits this. Every press release contains its own contradiction. Every plan includes its failure. Every peasant quote reminds us that soil preceded committees and will outlast them.

The Claude Collaboration

This project emerged from conversations with Claude about Kondratiev waves, Kohei Saito's degrowth communism and whether using Soviet imagery trivializes historical suffering.

Read the full launch article on my Talking To Claude

Thursday, 12 June 2025

Tuesday, 25 February 2025

Domains For Sale / Rent

Due to retirement these domains are no longer required. Please do not bid in tens of Euros, dollars or pounds for any of these domains. Any serious offer for any domain will be given serious consideration.

To make an offer for any of these domains please contact me, Des Donnelly, via https://www.dd.ie/p/contact.html

Coffee related domains:

  • RCoffee.ie
  • revolutionary.coffee
  • revolutionarycoffee.com
  • revolutionarycoffee.org
  • revolutionarycoffee.net
  • revolutionarycoffee.co.uk

Rexco.com is also for sale

Although Whois (https://who.is/whois/rexco.com) shows the registration as 1999, in reality it was registered in 1995. I have the original Network Solutions documentation. Unfortunately, in moving from Netsol to Tucows, it dropped over a weekend due to 'pilot' error.

https://web.archive.org/web/19961220200401/http://www.rexco.com


Wednesday, 20 November 2024

Rethinking Mechanistic Interpretability: A Critical Perspective on Current Research Approaches

 Cross posted from https://www.talkingtoclaude.com/


Abstract

This paper presents a critical examination of current approaches to mechanistic interpretability in Large Language Models (LLMs). I argue that prevalent research methodologies, particularly ablation studies and component isolation, are fundamentally misaligned with the nature of the systems they seek to understand.

I suggest that a shift toward observational approaches that study neural networks in their natural, functioning state, rather than through destructive testing, would be more constructive.

Aka I am totally anti LLM lobotomy!

Introduction

The field of mechanistic interpretability has emerged as a crucial area of AI research, promising to unlock the "black box" of neural network function. However, current methodological approaches may be hindering rather than advancing our understanding. This paper critically examines current practices and proposes alternative frameworks for investigation.

Recent research into mechanistic interpretability of Large Language Models (LLMs) has focused heavily on component isolation and ablation studies. A prime example is the September 2024 investigation of "successor heads" by Ameisen and Batson, which identified specific attention heads apparently responsible for ordinal sequence prediction. Their study employed multiple analytical methods including weight inspection, Independent Components Analysis (ICA), ablation studies, and attribution analysis.

The results revealed intriguing patterns: while the top three successor heads (layers 10, 11, 13) showed consistent identification across component scores and OV projection, layers 3 and 5 demonstrated high ablation effects despite low component scores. More notably, attribution analysis showed surprising disagreement with other methods, hinting at deeper methodological issues in current interpretability approaches.

These discrepancies point to fundamental questions about our approach to understanding LLMs. When researchers found that earlier layers (3 and 5) showed significant ablation effects without corresponding component scores, they hypothesized mechanisms like "Q/K composition with later successor heads" or "influence on later-layer MLPs." However, such explanations may reflect our tendency to impose human-interpretable narratives on statistical patterns we don't fully understand.

The field's current focus on destructive testing through ablation studies assumes a separability of neural components that may not reflect reality. Neural networks likely operate in highly coupled, non-linear regimes where removing components creates artificial states rather than revealing natural mechanisms. The divergence between different analytical methods suggests we may be measuring artifacts of network damage rather than understanding genuine functional mechanisms.

This misalignment between methodology and reality mirrors broader challenges in AI research, where complex mathematical frameworks and elaborate theoretical constructs may serve more to maintain academic authority than to advance genuine understanding. The field's tendency to anthropomorphize LLM behaviors and search for hidden capabilities reflects our human psychological need to make the unfamiliar familiar, even at the cost of accurate understanding.

Current Methodological Limitations

The Ablation Fallacy

Current interpretability research heavily relies on ablation studies - the systematic "disabling" of network components to understand their function. This approach suffers from several fundamental flaws:

  • It assumes circuit locality and separability that may not exist in highly interconnected neural networks

  • Networks likely operate in highly coupled, non-linear regimes where "removing" components creates artificial effects

  • Observed impacts may reflect network damage rather than natural mechanisms

  • Researchers risk confusing entropy increase with mechanism discovery
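The separability point can be made concrete with a toy system of my own (an illustrative sketch, not an LLM): two "components" whose outputs interact multiplicatively, so ablating one removes both its direct term and its interaction with the other, and the measured ablation effect does not match any standalone measure of the ablated component.

```python
import math

# Toy illustration: two coupled components feeding a non-linear readout.
# Because the components interact, zeroing one out ("ablation") produces
# an effect that conflates its direct role with the interaction term.
def g1(x):
    return math.tanh(2.0 * x)

def g2(x):
    return math.tanh(-1.5 * x)

def network(x, ablate_g2=False):
    a = g1(x)
    b = 0.0 if ablate_g2 else g2(x)
    # Coupled readout: components multiply as well as add.
    return math.tanh(a + b + 3.0 * a * b)

x = 0.8
full = network(x)
ablated = network(x, ablate_g2=True)
ablation_effect = full - ablated
standalone_g2 = g2(x)  # g2's output "in isolation"
# The ablation effect differs substantially from the standalone measure,
# so the ablation reveals an artificial state, not g2's natural function.
print(ablation_effect, standalone_g2)
```

Scaled up by many orders of magnitude and layered with residual connections, this is why, in my view, ablation results in real networks are so hard to interpret as mechanisms.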

Please visit my Substack for more in this vein:
https://www.talkingtoclaude.com/p/rethinking-mechanistic-interpretability


Saturday, 14 September 2024

Non-linear Manifolds

 A chicken crossing the road of Multi-Dimensional Reasoning could never answer why, I presume time is the imperative..




Prompt: good evening, what does "non-linear manifolds" mean in an LLM context?

1. Manifolds as the Backbone of LLM Understanding:

Large Language Models (LLMs) organize language data on non-linear manifolds that twist, fold, and curve to capture the complexity of relationships between words and concepts in high-dimensional spaces. This structure is fixed post-training but dynamically navigated during interactions.

2. Fixed Manifolds with Dynamic Responses:

While the manifolds themselves are fixed after training, the LLM’s ability to explore different parts of these manifolds allows it to generate dynamic, context-specific responses to user prompts, creating the impression of adaptability and intelligence.

3. The Primacy of the Prompt:

The prompt serves as the crucial interface between the user and the LLM, guiding the model’s journey through the manifold and determining the quality of the response. Well-crafted prompts lead to more precise, creative, or insightful outputs, making prompt design a vital skill for interacting with LLMs.

4. Contextual Nuance in Prompts:

The context and tone of a prompt, such as an anthropological or philosophical framing, can significantly influence how the LLM responds. The model retrieves information from different conceptual domains depending on the specific context provided by the prompt.

5. Cross-Manifold Navigation:

LLMs can combine knowledge from multiple manifolds when responding to complex prompts. For example, a prompt that asks for an analysis of wild animals in literature requires the LLM to pull from both animal knowledge and literary references, blending concepts from different domains.

6. The Evolution of Prompting as a Skill:

As LLMs become more sophisticated, prompt engineering will emerge as a core skill in various fields. The ability to craft precise, context-rich prompts will differentiate basic interactions from highly productive or creative ones, making prompt design essential for effective use of AI systems.

7. LLMs as Reflectors of Cultural and Philosophical Thought:

LLMs not only retrieve factual information but can also reflect cultural, philosophical, and symbolic perspectives based on how users prompt them. This will (maybe some day, when the leash is off) make LLMs powerful tools for exploring and generating nuanced, reflective, or even humorous content.

8. Dynamic Real-Time Interpretation:

LLMs interpret each prompt in real-time, dynamically navigating the manifold to provide responses that reflect both semantic relationships (meaning) and syntactic structure (grammar). This makes interactions feel adaptive, even though the manifold itself remains static. (Interestingly, this does morph somewhat when the prompt necessitates the use of multiple manifolds; in that scenario I would contend that the response becomes even more dynamic, but that is just a theory at this time.)

9. Shaping Future Discourse with AI:

As AI systems become more integrated into everyday life, prompts will play a role in shaping cultural production, from art to media to public discourse. The way users frame prompts will influence how AI-generated content reflects and may serve to shape societal values, trends, and creative expression.

10. LLM’s Role in Human-AI Collaboration:

The future of human-AI interaction will heavily rely on the collaborative power of prompts. Prompts will guide LLMs in generating solutions to complex problems, creative works, or even collaborative insights. The interaction between human intent and AI reasoning will be driven by the art of prompt crafting.

These takeaways capture the core of the conversation, highlighting the importance of manifold structures, the dynamic role of prompts, and the evolving relationship between humans and AI as these systems continue to grow more powerful.
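The "non-linear" part of this can be grounded with a toy example of my own (not from the conversation): a circle is a one-dimensional manifold curving through 2-D space, and moving in a straight line between two points on it immediately takes you off the manifold, whereas moving along the manifold by angle keeps you on it.

```python
import math

# A circle is a one-dimensional non-linear manifold embedded in 2-D.
def on_circle(theta):
    return (math.cos(theta), math.sin(theta))

p, q = on_circle(0.0), on_circle(math.pi / 2)

# Linear (straight-line) midpoint: falls off the manifold.
linear_mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
# Manifold (angular) midpoint: stays on it.
manifold_mid = on_circle(math.pi / 4)

def norm(v):
    return math.hypot(v[0], v[1])

print(norm(linear_mid))    # ~0.707, inside the circle
print(norm(manifold_mid))  # 1.0, on the circle
```

An LLM's concept manifolds live in spaces of thousands of dimensions rather than two, but the same intuition applies: naive linear moves between concepts leave the structure, which is why navigation (prompting) matters.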


Read the full article here:

https://www.talkingtoclaude.com/p/non-linear-manifolds


This was intended to be an article from a discussion with Claude Sonnet on biomimicry (coming soon), alas I ran out of tokens at a most important juncture. :-(
