... OK. I think what I am getting at is: if an AI, being 'intelligent', creates algorithms of its own, is it possible for human engineers to work out how it did it?
Bottom line: it isn't (intelligent), it doesn't (create algorithms of its own), and working out how is rarely possible to any useful extent.
Here's a highly condensed set of reasons why ...
(1) No AI is 'intelligent' in any sense that correlates with what we like to call human intelligence. AIs are designed to mimic behaviors that we would consider equivalent to what an 'intelligent' human does.
(2) There have been software applications that can self-organize, reorganize, and / or generate new additions to their own code base. Generally speaking, however, AIs don't rewrite the code by which they do inference over a base of data and rules. 'Machine learning' has always been directed at manipulating that data and / or the inference rules (e.g., criteria values / weights) rather than re-wickering the logic that works upon those things.
Let me try an illustrative analogy. Say you rely on a handbook for every step of doing a job (e.g., working a 'case' of some sort), and this handbook is the definitive guide to both the relevant data (measurements, specifications, etc.) and the rules for how to work with that data in light of inputs / changes. Now say you modify the handbook over time: adding updated pages / sections, annotating it, and so on. Your brain doesn't change, but the reference guide 'out there' does. Shifts in behavior are mostly a matter of changes to the guidelines / rules / data 'out there' in the handbook. In an analogous fashion, classic machine learning systems keep the relevant data and rules 'out there', external to the logic that consumes them.
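To make the weights-vs-code distinction concrete, here's a minimal sketch (a toy perceptron of my own devising, not taken from any particular system): notice that learn() only ever rewrites the numbers - the 'handbook' - while the inference logic in predict() never changes.

```python
# Fixed inference logic: weighted sum plus bias, thresholded at zero.
def predict(weights, bias, inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# "Learning" = adjusting the externalized parameters, nothing more.
def learn(weights, bias, inputs, target, rate=0.1):
    error = target - predict(weights, bias, inputs)
    new_weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    return new_weights, bias + rate * error

# Train on a toy AND function. The code above is identical before and
# after training; only the numbers it consults have changed.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = [0.0, 0.0], 0.0
for _ in range(25):
    for inputs, target in examples:
        weights, bias = learn(weights, bias, inputs, target)

print("learned weights:", weights, "bias:", bias)
print([predict(weights, bias, x) for x, _ in examples])  # -> [0, 0, 0, 1]
```

If you 'diff' the program before and after training, the functions are byte-for-byte identical; only the data changed. That's the sense in which machine learning doesn't rewrite its own code.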
(3) Neural-style AIs have to be trained until they yield acceptably 'good' results. Generally speaking, such neural implementations are black boxes: they provide no means of determining, step by step, how they adapt / evolve over time. The trade-off is that they're relatively easy to train up to acceptable performance, but they do what they do, and there's little basis for following what's going on inside them. In other words, straightforward to set up and start using, but opaque to subsequent inspection.
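Here's a toy sketch of what that opacity looks like in practice (my own throwaway example, not anyone's production system): a tiny network trained on XOR by gradient descent. It ends up giving acceptably 'good' answers, but what it learned is just matrices of floats - printing them tells you nothing step-by-step about how it decides.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Inputs carry a constant 1 so the weight matrices include bias terms.
X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(size=(3, 8))   # input -> hidden weights
W2 = rng.normal(size=(9, 1))   # hidden (+ bias) -> output weights

for _ in range(10_000):
    h = sigmoid(X @ W1)                        # forward pass
    hb = np.hstack([h, np.ones((len(X), 1))])
    out = sigmoid(hb @ W2)
    d_out = (out - y) / len(X)                 # sigmoid + cross-entropy gradient
    d_W2 = hb.T @ d_out                        # backpropagate
    d_h = (d_out @ W2.T)[:, :-1] * h * (1 - h)
    d_W1 = X.T @ d_h
    W2 -= 1.0 * d_W2
    W1 -= 1.0 * d_W1

print(out.round(3).ravel())   # ~ [0, 1, 1, 0]: it "works"...
print(W1, W2, sep="\n")       # ...but the "how" is buried in these numbers.
```

Scale those two small matrices up to billions of parameters and you have the inspection problem in a nutshell.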
(4) The ability to deconstruct and analyze the course of machine learning requires representing and retrieving data on what transpired as the AI operated. Phrased another way, it's like asking a friend or relative, "What were you thinking?" You can't figure out how they got to an eventual state without knowing both what data they were relying on in the moment and what (if any ... ) rule(s) they were using to determine subsequent responses / actions.
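As a hypothetical sketch of what answering "what were you thinking?" would require, here's a learner (invented for illustration; the update rule is deliberately trivial) that records, at every adaptation step, the data it saw, the rule that fired, and the state change - so the whole trajectory can be replayed later:

```python
from dataclasses import dataclass, field

@dataclass
class TraceEntry:
    step: int
    observed: tuple   # the data the system was relying on in the moment
    rule: str         # which update rule it was using
    before: dict
    after: dict

@dataclass
class AuditedLearner:
    threshold: float = 0.5
    step: int = 0
    trace: list = field(default_factory=list)

    def update(self, observed, target):
        # Toy update rule: nudge the threshold when a prediction is wrong.
        before = {"threshold": self.threshold}
        predicted = observed[0] > self.threshold
        if predicted != target:
            self.threshold += 0.1 if target is False else -0.1
            rule = "nudge-threshold-on-error"
        else:
            rule = "no-op"
        self.step += 1
        self.trace.append(TraceEntry(self.step, observed, rule,
                                     before, {"threshold": self.threshold}))

learner = AuditedLearner()
learner.update((0.7,), target=False)
learner.update((0.4,), target=True)
for entry in learner.trace:   # the retrospective "what were you thinking?"
    print(entry)
```

Neural training loops generally don't keep anything like this trace - billions of weight nudges are applied and discarded - which is precisely why the question can't be answered after the fact.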
Older AI systems - 'symbolic' rather than neural - sometimes afforded such retrospective dissection / analysis, insofar as they were just more sophisticated versions of other advanced software. The time / effort involved, and the probability of actually understanding how the system 'got to where it ended up', depended on how it was implemented and whether the designers / developers had built in any debugging, tracking, or analysis capabilities. This is the opposite case from the neural approach: the initial coding and tweaking took a long time, but eventual debugging / analysis could be made easier.
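A sketch in the spirit of those symbolic systems (the rules here are made up, not from any real expert system): a forward-chaining engine whose record of which rules fired is exactly the kind of built-in tracking hook that made retrospective analysis feasible.

```python
# Each rule: (name, set of required facts, fact it concludes).
rules = [
    ("has-feathers & lays-eggs => bird",
     {"has-feathers", "lays-eggs"}, "bird"),
    ("bird & cannot-fly => flightless-bird",
     {"bird", "cannot-fly"}, "flightless-bird"),
]

def infer(facts, rules):
    facts, fired = set(facts), []
    changed = True
    while changed:                      # forward-chain to a fixed point
        changed = False
        for name, conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                fired.append(name)      # record *why* each fact appeared
                changed = True
    return facts, fired

facts, fired = infer({"has-feathers", "lays-eggs", "cannot-fly"}, rules)
print(facts)
for step in fired:                      # the retrospective explanation
    print("because:", step)
```

Every conclusion comes with the chain of rules that produced it - a 'how it got there' for free. That's the capability the neural approach traded away for ease of training.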