22 March 'trawl'.

All sorts of amusements and nonsense unrelated to xTalk
User avatar
richmond62
Posts: 5288
Joined: Sun Sep 12, 2021 11:03 am
Location: Bulgaria
Contact:

22 March 'trawl'.

Post by richmond62 »

So, here I am at my school on a Saturday, waiting for a supposedly disturbed 17-year-old boy (in all probability his parents are the disturbed ones, as they don't seem to have a drop of understanding of the mental hiccups a 17-year-old boy is likely to be going through nowadays), and I'm faffing around on the internet:

https://www.crumplab.com/programmingfor ... ments.html
https://richmondmathewson.owlstown.net/
User avatar
OpenXTalkPaul
Posts: 2836
Joined: Sat Sep 11, 2021 4:19 pm
Contact:

Re: 22 March 'trawl'.

Post by OpenXTalkPaul »

richmond62 wrote: Sat Mar 22, 2025 11:31 am So, here I am at my school on a Saturday, waiting for a supposedly disturbed 17-year-old boy (in all probability his parents are the disturbed ones, as they don't seem to have a drop of understanding of the mental hiccups a 17-year-old boy is likely to be going through nowadays), and I'm faffing around on the internet:

https://www.crumplab.com/programmingfor ... ments.html
I think I saw this before, linked from some GitHub repo if I recall correctly.

xTalk could probably be as useful for academic research as MATLAB.
This 'webtalk' effort particularly reminds me a bit of Jupyter Notebook:
https://www.mathworks.com/help/cloudcen ... pyter.html
https://jupyter.org

In fact, since that is FOSS, maybe we could 'borrow' some code from its UI widgets:
https://ipywidgets.readthedocs.io/en/la ... index.html

Re: 22 March 'trawl'.

Post by richmond62 »

maybe we could 'borrow' some code from its UI widgets
Beg, borrow, or steal: as long as it is not bound up in some sort of copyright guff, the more the merrier, especially if it can add to the enterprise at hand. 8-)

Re: 22 March 'trawl'.

Post by richmond62 »

xTalk could probably be as useful for academic research as MATLAB.
I don't think 'probably' comes into the equation.

Here is a quote from my MSc thesis (2004):

"The development of the WIMP-GUI as a de facto standard, and the concurrent development of software authoring packages while initially intended to make computer systems readily usable by non-computer experts has resulted in a stagnation of the goal of developing packages that are genuinely “usable, without help or instruction, by a user who has knowledge and experience in the application domain but no prior experience with the system.” (Constantine and Lockwood 1999, p.47)

In fact it would be probably safe to say that many software authoring packages that started their lives with this goal as central to their development plans have tended through each development cycle / version to rely on an increased familiarity of the end-user with computer-specific concepts and programming language."

Emphasis mine (added now).

xTalk CAN be useful for academic research, and to a far larger extent than MATLAB, which is confined to one academic niche.

That is not the problem, and has never been the problem at least for the last 21 years since I submitted my Master's degree thesis.

The 'problem' can be stated this way:

1. Imagine an academic researcher who is at the top of their game academically.

1.1. However old-fashioned my viewpoint is: as far as I am concerned, that academic is at the top of their game as a result of some very hard, concentrated work.

1.2. That academic, having focussed all their energies on getting to the top of their academic game, has NOT had the time to become even semi-competent in any sort of programming language (they may have used pre-prepared software packages for processing data related to their field).

So we can envisage our theoretical academic wanting to do some innovative research (after all NOT all academics are fossilised from the neck upwards (just most of them . . . ROFL)) that will necessitate developing some new software; and to achieve that there are a few possible scenarios:

1. Said academic has to take 3-6 months away from their field of work to become semi-competent in some sort of computer language and/or programming package.

2. Said academic has to beg, borrow, hijack, or pay for a programmer to develop the necessary software for them.

#1 is pretty obviously a load of unmitigated rubbish as their field of work has to retain their focus.

#2 is problematic as the academic is going to have to explain an awful lot of their field of work to the programmer in a way that the programmer can understand, and the programmer is going to have to explain to the academic what they can do in terms the academic can understand: there will be an awful lot of "err" moments, and several full-on arguments.

Now what I really like about the web-page that I referenced in my initial posting is that it explained HOW one could cobble together a software package in xTalk (LiveCode) in a way that a psychologist might understand.

I would argue that the web-page creator was able to do that without going off on all sorts of abstruse tangents precisely because of the nature of xTalk: it is considerably closer to natural language (English) than the vast majority of other ways of programming computers, and a lot more tolerant of things that academics are not going to be paying much attention to (double spaces, and so on).

The learning curve to become semi-competent in xTalk is a whole lot shallower than that of almost all other programming languages.

*I am using the term 'semi-competent' here NOT as an attempt to denigrate academics who would be learners of xTalk, but for 2 other reasons:

1. I believe that to become fully competent in xTalk probably takes as much time and trouble as with any other language (never having achieved full competency in xTalk myself, I am not capable of stating that as a fact).

2. Semi-competency should more than suffice for an academic's requirements re data processing.

Here's another quote from my MSc thesis:

"Educators are hampered in fulfilling their desires to develop bespoke software for content delivery and reinforcement by the requirement that they have to have computer-programming skills as a prerequisite.

Currently available ‘wizard’ interfaces attached to software authoring packages go a small way to solving that problem. However there is a tendency for these interfaces to use ‘computer-speak’ terms to explain themselves; this acts as another barrier to non-specialist empowerment.

The current dependency on the WIMP-GUI means that all available software authoring packages adhere to an in-house style to integrate the package interfaces into the general methodology of WIMP-GUIs. Participants in the two workshops said that they found menu-driven interfaces extremely confusing.

It should not be necessary for specialists in non-computer based disciplines to be computer literate. It should, however be possible for those specialists to rapidly develop software items relating to conveying information about their specialist disciplines without that computer literacy."

Emphasis mine (added now).

-----

And the lesson is: Be careful what you ask for: because you might get a full-blown lecture from Richmond. 8-)

Re: 22 March 'trawl'.

Post by OpenXTalkPaul »

Hah! That was great.
Yes, that was one very noble goal in developing HyperCard and other early Mac software: the developers insisted on having constant graphical cues, because they wanted it all to be usable by anyone, WITHOUT 'read the manual' as much as possible. That is perhaps the ultimate benchmark of good UI design.

And HyperCard was user-modifiable software featuring the concept of 'scripting' rather than programming. You don't so much write a program as write out what you want it to do, in a domain-specific subset of English, which has the marginal effect of needing far less commentary in the margins than other coding languages. But it relies on a kit of UI components to snap together as needed for whatever you might need it to do, including algorithmically drawing a picture or playing a randomly generated tune. The difficult bits, like generating sound waves, are already taken care of for the script writer, who only needs to tell the note-player 'thingy' which notes they want it to play.

I also liked the OpenDoc concept of taking user-modifiable software a step further, moving towards doing away with the concept of 'Apps' altogether; instead it's a system that makes available prebuilt software components you snap together like LEGO bricks to build things into 'live' docs, rather like stacks really. As I may have mentioned, I did experiment with replacing the Finder with HyperCard, basically making HC the 'shell' for the OS. Honestly, I think that could be achieved today, but as a Linux distro instead of the classic macOS as the 'kernel'.

But if you think about it, isn't moving towards doing away with the concept of 'Apps' altogether where Siri and Alexa plus AI are headed? I think it is already headed much further than that: doing away with graphical widgets and moving towards verbal commands to an AI as the user interface. It's certainly gotten much better than in the 1990s, when I used Apple's 'PlainTalk' microphone to trigger AppleScripts.
User avatar
tperry2x
Posts: 3522
Joined: Tue Dec 21, 2021 9:10 pm
Location: webtalk.tsites.co.uk
Contact:

Re: 22 March 'trawl'.

Post by tperry2x »

richmond62 wrote: Sun Mar 23, 2025 3:11 pm #2 is problematic as the academic is going to have to explain an awful lot of their field of work to the programmer in a way that the programmer can understand, and the programmer is going to have to explain to the academic what they can do in terms the academic can understand: there will be an awful lot of "err" moments, and several full-on arguments.
I can absolutely relate to this. This was the situation sitting down with a very skilled maths teacher (who is frankly over-qualified to be teaching at secondary-school level), trying to thrash out JavaScript in Notepad while he explained mathematical concepts ("things I should already know"). We didn't have any arguments, though.
This is the only reason most of TerryL's mathematical functions now exist in the browser implementation I'm putting together. I could not have worked them out myself.
OpenXTalkPaul wrote: Sun Mar 23, 2025 5:39 pm As I may have mentioned I did experiment of replacing the Finder with HyperCard, basically making HC the 'shell' for the OS, honestly I think that could be achieved today but as a Linux distro instead of the classic macOS as 'kernel'.
I'd like someone to do this for Windows. Windows Explorer needs an alternative written for it, and I'm sure you could replace "explorer.exe" with something far better, something made in OXT.

Re: 22 March 'trawl'.

Post by richmond62 »

What is bizarre from my point of view is that I wrote what I wrote 21 years ago:

https://www.dropbox.com/scl/fo/805nmv0o ... guyiu&dl=0

and almost all of the discussion is as relevant now as it was then.

The software is out-of-date, although an attempt to create an agent-led software creation package in OXT starting with the same premiss is of course entirely possible.

Re: 22 March 'trawl'.

Post by tperry2x »

richmond62 wrote: Sun Mar 23, 2025 6:43 pm ...an attempt to create an agent-led software creation package in OXT starting with the same premiss is of course entirely possible.
You could indeed do that, and that's apparently exactly what LiveCode Create does, but essentially it's posting the question to an AI agent and waiting on the response back.

However, to do that you can't just put the script in a button, because anyone can see the script of that button, and it would reveal the API key needed to access ChatGPT, for example.

In fact, you can ask ChatGPT about the implications of exposing your personalised API key to all and sundry:
Implications of Exposing Your API Key
  • Unauthorised Use – Anyone can take your key and use it to make API requests, potentially running up your costs.
  • Rate Limits & Throttling – If someone abuses your key, you might hit OpenAI's rate limits, preventing your app from working.
  • Security Risk – If you ever include this key in a public repository (e.g., GitHub), bots might scan and steal it.
If you're distributing your stack, you should never expose the API key to the end user, or allow it to be discovered. Instead, create a small web server (e.g., using PHP) that handles API requests and keeps the key secret.
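That server-side proxy idea can be sketched quite briefly. Here it is in Python rather than PHP, purely for illustration; the endpoint URL, the `OPENAI_API_KEY` environment-variable name, and the port are my assumptions, not anything from the post:

```python
import json
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# The secret key lives only on the server, read from the environment;
# the distributed stack never contains it.
API_KEY = os.environ.get("OPENAI_API_KEY", "")
API_URL = "https://api.openai.com/v1/chat/completions"  # assumed endpoint

def build_upstream_request(body: bytes) -> urllib.request.Request:
    """Wrap the client's JSON body in a request that carries the key."""
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + API_KEY,
        },
    )

class ProxyHandler(BaseHTTPRequestHandler):
    """Accepts a POST from the stack and relays it upstream."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        try:
            with urllib.request.urlopen(build_upstream_request(body)) as resp:
                data, status = resp.read(), 200
        except OSError:
            data, status = json.dumps({"error": "upstream failed"}).encode(), 502
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)

# To run:  HTTPServer(("", 8080), ProxyHandler).serve_forever()
```

The stack then POSTs its question to this proxy instead of to the AI service directly, so anyone reading the button script sees only your own server's address, never the key.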

Re: 22 March 'trawl'.

Post by richmond62 »

Personally, I'd steer well clear of any artificial "intelligence", mainly for the simple reason that, like a parrot or a mynah, it is not intelligent; it is just a very clever mimic.

Re: 22 March 'trawl'.

Post by tperry2x »

richmond62 wrote: Sun Mar 23, 2025 7:15 pm ...it is just a very clever mimic.
Not even that 'clever' in most cases.
ChatGPT-hallucinations-BBC-News.pdf
Certainly nobody is being held to account over its inaccuracies.

The issue with AI, with all these algorithms, is that nobody really knows how they reach the conclusions they do. They don't show any reasoning behind how they match up the original query with their dished-out answer.

This makes troubleshooting them really hard. Also, there's no long-term memory. Ask ChatGPT to produce something as simple as a Word document, formatted a specific way, with things like a table of contents and perhaps a few paragraph styles. If you then go back to it later that same day and ask it to add things to the document, it can completely wreck the formatting or wipe out any additional changes you've made. It will likely have forgotten everything you previously 'talked' about. That makes it all a bit of a non-starter: great for quick queries and suchlike, but for anything meaningful, where you have to add to the document in iterations, it seems unable to handle the additional edits. Combined with its really short memory, that isn't a good combination.

Re: 22 March 'trawl'.

Post by OpenXTalkPaul »

With ChatGPT, if you return to the same chat topic, I think it helps with focusing its 'train of thought'. It still has long-term memory problems, but often reminding it of previous correct answers on a subject, or explaining to it why an answer was wrong (for example, a user reply like 'WRONG, that's BASIC, not xTalk script'), is enough to get it back on track for the topic.

What I mean is, I have 'questions' like "writing Extension Builder modules" where I keep coming back to the same 'conversation' to train it on what 'LCB' is about, how the syntax is related to and different from xTalk / LiveCode Script, etc. Then there are the more nuanced things, like explaining to it how foreign-function handlers should be declared. Surprisingly, at first it already understood some of it; I'm sure it has scanned through examples on GitHub, and probably forum posts and email lists too, and certainly somewhat also derived from other people using it for the same purpose. So yeah, it is pretty much a plagiarism machine (at least Google's AI summary shows links to where it pulled its summary info from). But an LLM understands, or tries to understand, the language model, and adheres to that model when generating answers. Very useful for programming.

The thing is, I already know what I want from the AI, so when it 'hallucinates' I know that it's pulling answers out of its ass or mixing things up with similar languages (usually mixing in xTalk script, but sometimes BASIC, C, or Pascal). Many of the LCB answers look formatted in a way that makes me think it has scanned MY GitHub repos, which probably hold the bulk of the LCB examples available on there; it names variables just the way I would have named them. But I'm OK with that; it's already open-source stuff, and I want the thing to be an expert on what I'm trying to get it to do, and it gets better at doing it for me.

That's the thing about AI: it gets better at everything it gets trained on, and big AI has had many people on low wages training AIs to eventually take their jobs. You'll often have to talk to an AI voice when you call for customer support already. I think we'd better hurry up and get to a 'Star Trek'-style Utopian society where no one needs to work and everyone can just get something to eat from the replicator for free when they're hungry.

But in the meantime (possible copyright complications aside), it can be an extremely helpful tool. I think avoiding AI would be equivalent to avoiding pocket calculators in the late 1970s.