
Venus, Velikovsky & Miscellaneous Speculations

Ghostisfort said:
I would be interested to know just how an ability to do maths is selected for in a hunter-gatherer?

Well... a caveman would have had to make quick mental judgements for things like how far to run, jump or throw a spear. No numbers would be involved, but the mental agility required to do this would evolve into the ability to do mental arithmetic.
 
Mythopoeika said:
Ghostisfort said:
I would be interested to know just how an ability to do maths is selected for in a hunter-gatherer?

Well... a caveman would have had to make quick mental judgements for things like how far to run, jump or throw a spear. No numbers would be involved, but the mental agility required to do this would evolve into the ability to do mental arithmetic.

How?
 
The theory of evolution is only a theory, after all.

No experiment has yet been devised to test its central tenet - that completely new species can be created by random mutation from existing species - although the creation of different breeds (I don't know the correct biological term) is confirmed by animal domestication, among other things. You can also cross closely related species and get hybrids, but I don't think that's the mechanism the theory relies on.

I don't put this forward as an argument for the existence or non-existence of anything; I'm just pointing out that the theory of evolution is no more or less than the most commonly accepted explanation for the diversity of life - it's not a 'fact'. It has partial supporting evidence, but it also poses some currently unsolved problems.
 
Ghostisfort said:
Intervention is more logical.

Only if all other avenues have been proven to be dead-ends in terms of research, proof, theory etc. Otherwise it's known as 'jumping the gun'.
 
This is still off-topic slightly for a Venus thread, but it's interesting, so heigh-ho...

Intervention seems unlikely, since it would have to have happened over a very long period of time. Modern humans appeared very early on the African continent, some examples dating back to 250,000 years ago.
http://en.wikipedia.org/wiki/Recent_Afr ... mo_sapiens
Symbolic behaviour seems to have started to emerge quite a long time ago in southern Africa, as evidenced by these beads from Blombos Cave, 75,000 years ago.
http://en.wikipedia.org/wiki/File:BBC-shell-beads.jpg
So it seems likely that complex human behaviour emerged very gradually, rather than all arriving in a rush, as suggested by the sapient paradox of Renfrew.

Humans and hominins have been developing increasingly complex behaviour and increasingly complex cultural toolkits for millions of years; this poses the question - when exactly is intervention supposed to have occurred? In the Oldowan period, the Lower or Upper Palaeolithic, or the Neolithic? The Bronze Age of Eurasia, and/or the early Mesoamerican period, much later? My drokk, they were at it constantly.

It seems as well that if intervention did in fact occur when early modern humans first evolved, the effects of that intervention did not manifest themselves until tens of thousands of years later, which seems very hit-and-miss.
 
Ghostisfort said:
Mythopoeika said:
Ghostisfort said:
I would be interested to know just how an ability to do maths is selected for in a hunter-gatherer?

Well... a caveman would have had to make quick mental judgements for things like how far to run, jump or throw a spear. No numbers would be involved, but the mental agility required to do this would evolve into the ability to do mental arithmetic.

How?

You're asking me how evolution works?
 
Mythopoeika said:
Ghostisfort said:
Mythopoeika said:
Ghostisfort said:
I would be interested to know just how an ability to do maths is selected for in a hunter-gatherer?
Well... a caveman would have had to make quick mental judgements for things like how far to run, jump or throw a spear. No numbers would be involved, but the mental agility required to do this would evolve into the ability to do mental arithmetic.
How?

You're asking me how evolution works?
When speaking of evolution it's inevitable that circular reasoning enters the discussion.
'It's evolution because that's what evolution does.'
But what is evolution?
It's the thing that brought the biosphere to what it is today.
the mental agility required to do this would evolve into the ability to do mental arithmetic.
An assumption based on the premiss that evolution is literal.

There is absolutely no evidence that the brain uses numbers. This is an assumption based on computers. Because some computer programs with clever programming appear to answer questions, that is how the brain works...simple... wrong. There is no binary code in thinking and AI is a myth.
To have AI, we would need to know how thinking works and no one knows.
Computers don't think and the brain does not use binary to throw a spear.
 
Ghostisfort said:
There is no binary code in thinking...
Wrong! The brain is a complex assemblage of cells called neurons, and these do have 'On or Off' states, i.e., they are binary. In more detail:
All neurons are electrically excitable, maintaining voltage gradients across their membranes by means of metabolically driven ion pumps, which combine with ion channels embedded in the membrane to generate intracellular-versus-extracellular concentration differences of ions such as sodium, potassium, chloride, and calcium. Changes in the cross-membrane voltage can alter the function of voltage-dependent ion channels. If the voltage changes by a large enough amount, an all-or-none electrochemical pulse called an action potential is generated, which travels rapidly along the cell's axon, and activates synaptic connections with other cells when it arrives.

http://en.wikipedia.org/wiki/Neuron
(As for A.I, that has its own thread.)
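To picture the 'all-or-none' part in code - a minimal sketch only, assuming a crude leaky integrate-and-fire model whose threshold, leak and reset values are arbitrary illustration rather than real neural constants:

# Toy leaky integrate-and-fire neuron: input accumulates and decays;
# crossing the threshold always produces a full-sized spike, never a partial one.
def simulate(inputs, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # decaying sum of inputs
        if potential >= threshold:
            spikes.append(1)   # all-or-none: a full action potential
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)   # sub-threshold: no output at all
    return spikes

print(simulate([0.3, 0.3, 0.3, 0.3, 0.1]))  # -> [0, 0, 0, 1, 0]

Sub-threshold input produces nothing at all; once the threshold is crossed, the pulse is always full-sized - which is the sense in which the firing is binary.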
 
rynner2 said:
Ghostisfort said:
There is no binary code in thinking...
Wrong! The brain is a complex assemblage of cells called neurons, and these do have 'On or Off' states, i.e., they are binary. In more detail:
All neurons are electrically excitable, maintaining voltage gradients across their membranes by means of metabolically driven ion pumps, which combine with ion channels embedded in the membrane to generate intracellular-versus-extracellular concentration differences of ions such as sodium, potassium, chloride, and calcium. Changes in the cross-membrane voltage can alter the function of voltage-dependent ion channels. If the voltage changes by a large enough amount, an all-or-none electrochemical pulse called an action potential is generated, which travels rapidly along the cell's axon, and activates synaptic connections with other cells when it arrives.

http://en.wikipedia.org/wiki/Neuron
(As for A.I, that has its own thread.)
:rofl:


Reductio ad absurdum.
I can only say, "Good Luck" to anyone who can connect the above to actual thinking, although it sounds intellectually impressive.

This has always been a problem for science, that the brain and its electrical impulses are material in nature, but thinking is not material. Science cannot handle the non-material, and so reduces thinking to something mundanely mechanical, as the Victorians did, or electrical, as is the trend today.

I don't think many are fooled by this blinding with scientific jargon, this extreme materialism.
We are not machines. Machines are not creative.

This is typical of the dehumanisation process that science uses to defend science. We can see the reason why, when there are more scientists than ever before, the productivity of science is less than ever before. They have themselves become automatons.
 
Ghostisfort said:
This has always been a problem for science, that the brain and its electrical impulses are material in nature, but thinking is not material.
This is another of your unfounded assertions. Can you define 'thinking', and prove it cannot be based on material systems (such as electronic logic gates, neuron assemblies, etc.)?

Assumptions made without evidence, let alone proof, are worthless unless they suggest where evidence or a proof might be found. Your empty words do neither.
 
Ghostisfort said:
....

This has always been a problem for science, that the brain and its electrical impulses are material in nature, but thinking is not material. Science cannot handle the non-material, and so reduces thinking to something mundanely mechanical, as the Victorians did, or electrical, as is the trend today.

I don't think many are fooled by this blinding with scientific jargon, this extreme materialism.
We are not machines. Machines are not creative.

.....

Machines aren't creative yet, is all you can say. Thought seems to be an emergent characteristic arising from the high degree of interconnectivity and interactions between neurones. You won't know whether thought will emerge from machines until the interconnectivity and feedback systems in computers get a lot more complex than they are now. IMO you'll get a mind; it's the connections and patterns that make a mind, rather than the substrate.
 
rynner2 said:
Ghostisfort said:
This has always been a problem for science, that the brain and its electrical impulses are material in nature, but thinking is not material.
This is another of your unfounded assertions. Can you define 'thinking', and prove it cannot be based on material systems (such as electronic logic gates, neuron assemblies, etc.)?

Assumptions made without evidence, let alone proof, are worthless unless they suggest where evidence or a proof might be found. Your empty words do neither.
I have always built my own computers, just because I like doing it.
One of the things I've noticed is that the one I built around a year ago - the one I'm using now - is no different in its basics from computers of the eighties. Faster, more memory, and that's about it. It does not even use its multi-core processor to its full capacity.

Now, when I'm told that my brain is a computer I tend to think about the progress made over the past decades and shake my head. I'm not a computer, I'm nothing like a computer.

The reason scientists think they are computers is because they're programmed and use the science database, nothing else, just like a computer, whereas I like to think for myself.
Thinking is about new ideas and imagination, something discouraged by both science and religion - straying outside the database.
(Don't you think it strange that they do so many things alike?)

Evidence and proof of the kind you would prefer are not available in the scientific database. Thinking about thinking is something abandoned by science long ago...computers don't daydream.
My own proof is to assume that anything dehumanising is logically wrong and go from there.

Timble2

Using a complexity from the future as proof of a present-day myth, is not very convincing.
 
Ghostisfort said:
Now, when I'm told that my brain is a computer I tend to think about the progress made over the past decades and shake my head. I'm not a computer, I'm nothing like a computer.

Not yet, at least. I say this because, at the moment, computers aren't capable of being enough like us. We have a way to go yet before that happens. And they're unlikely to be anything like the sort of computer you're using now when posting on the FTMB. So when you think of 'computer' you may be being too literal in terms of what exists now and what's existed in the past.
 
Do you think your home PC is anything like, say, the K supercomputer? Even something like that isn't anywhere near being as capable as a human brain, your home PC even less so (to put it mildly). It also depends on other factors, such as future materials and production methods. Then add to that programming, architecture methodologies, etc.

That said, perhaps something like Blue Brain may be one step. Others here may be able to provide other examples.
 
Yep, maybe the relevant bits of this thread need to be pasted into that one ;)
 
Jerry_B said:
Do you think your home PC is anything like, say, the K supercomputer? Even something like that isn't anywhere near being as capable as a human brain, your home PC even less so (to put it mildly). It also depends on other factors, such as future materials and production methods. Then add to that programming, architecture methodologies, etc.

That said, perhaps something like Blue Brain may be one step. Others here may be able to provide other examples.
The term "Super Computer" refers to speed and not special function and if you look at Wiki on the subject, you will see that this is made clear.
Just like the home PC has done since the eighties and before, but faster.

There is nothing in a supercomputer that anyone can compare to a human brain. A computer is the perfect idiot and asking it to solve problems is a waste of time - it crunches numbers. This is what computers are good at, adding. There is not a shred of evidence that our brain crunches numbers in order to think.

I see no evidence that programming is advancing, apart from bigger programs based on old programs. The methods used in programming have not changed since the seventies.
Blue Brain
But the EPFL team’s work demonstrates that some of our fundamental representations or basic knowledge is inscribed in our genes. This discovery redistributes the balance between innate and acquired, and represents a considerable advance in our understanding of how the brain works. http://actu.epfl.ch/news/new-evidence-f ... owledge-5/
The need then arises to show how information is moved from gene to brain and where in the gene all of this info is stored. Once the information is transferred, how does it think, daydream, solve problems, be happy, be sad, all with binary numbers?
Where did all of history's 'new' ideas come from if we started with no database?
Where is the program that runs the data?

All of these problems need to be solved before we can say we have any answers and they are not even addressed, not even with an adding machine.

I'll stick with the collective consciousness in the aether - it works. :kissers:
 
Ghostisfort said:
There is nothing in a supercomputer that anyone can compare to a human brain. A computer is the perfect idiot and asking it to solve problems is a waste of time - it crunches numbers.
Expert systems.

Watson is currently being put to use diagnosing illness.
 
kamalktk said:
Ghostisfort said:
There is nothing in a supercomputer that anyone can compare to a human brain. A computer is the perfect idiot and asking it to solve problems is a waste of time - it crunches numbers.
Expert systems.

Watson is currently being put to use diagnosing illness.

These are the simplest kind of programs: IF... THEN... GOTO.
These have been around for as long as we have had computers. An expert provides the information, which the programmer arranges to answer pre-programmed questions.
 
Ghostisfort said:
That said, perhaps something like Blue Brain may be one step. Others here may be able to provide other examples.
The term "Super Computer" refers to speed and not special function and if you look at Wiki on the subject, you will see that this is made clear.
Just like the home PC has done since the eighties and before, but faster.

There is nothing in a supercomputer that anyone can compare to a human brain. A computer is the perfect idiot and asking it to solve problems is a waste of time - it crunches numbers. This is what computers are good at, adding. There is not a shred of evidence that our brain crunches numbers in order to think.

I don't think that you've understood my point.

I see no evidence that programming is advancing, apart from bigger programs based on old programs. The methods used in programming have not changed since the seventies.
Blue Brain
But the EPFL team’s work demonstrates that some of our fundamental representations or basic knowledge is inscribed in our genes. This discovery redistributes the balance between innate and acquired, and represents a considerable advance in our understanding of how the brain works. http://actu.epfl.ch/news/new-evidence-f ... owledge-5/
The need then arises to show how information is moved from gene to brain and where in the gene all of this info is stored. Once the information is transferred, how does it think, daydream, solve problems, be happy, be sad, all with binary numbers?
Where did all of history's 'new' ideas come from if we started with no database?
Where is the program that runs the data?

All of these problems need to be solved before we can say we have any answers and they are not even addressed, not even with an adding machine.

Bigger programs based on old programs is a lot like the way the human brain has evolved. Are you expecting programming to always evolve out of thin air and to work seamlessly with older/other programming? I'm sure a programmer would roll his/her eyes at that ;) None of the other things you mention are going to happen next week, so you may be jumping the gun a bit to exclude the possibility that some form of computer in the future may be able to do the same things as a human brain.

In general I'd say you need to look at the AI thread, so that this one doesn't go OT.

I'll stick with the collective consciousness in the aether - it works. :kissers:

How so, and is there any evidence for it?
 
Ghostisfort said:
kamalktk said:
Ghostisfort said:
There is nothing in a supercomputer that anyone can compare to a human brain. A computer is the perfect idiot and asking it to solve problems is a waste of time - it crunches numbers.
Expert systems.

Watson is currently being put to use diagnosing illness.

These are the simplest kind of programs: IF... THEN... GOTO.
These have been around for as long as we have had computers. An expert provides the information, which the programmer arranges to answer pre-programmed questions.
If they were IF THEN GO TO, they would answer with 100% certainty as there would only be one path through to the answer, and only one possible answer. Expert systems produce probabilities and answer based on what they think is most likely (or don't if no possibility is likely). Even if the functional layout is different, it's the same process we use, making a best guess.
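As a toy contrast - the rules and weights below are invented for illustration, nothing like a real system such as Watson - a scoring program ranks hypotheses instead of following one hard-wired path to a single answer:

# Hypothetical mini 'expert system': score each diagnosis by the weight
# of its matching symptoms and rank the results, rather than following
# a single IF/THEN path to one fixed answer.
RULES = {
    "flu":  {"fever": 0.6, "aches": 0.3, "cough": 0.4},
    "cold": {"cough": 0.5, "sneezing": 0.6},
}

def diagnose(symptoms):
    scores = {illness: sum(w for s, w in evidence.items() if s in symptoms)
              for illness, evidence in RULES.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(diagnose({"fever", "cough"}))  # -> [('flu', 1.0), ('cold', 0.5)]

Nothing here is certain of its answer; it offers a best guess with a score attached, and a different symptom set reorders the list.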
 
Jerry_B said:
Bigger programs based on old programs is a lot like the way the human brain has evolved. Are you expecting programming to always evolve out of thin air and to work seamlessly with older/other programming? I'm sure a programmer would roll his/her eyes at that ;) None of the other things you mention are going to happen next week, so you may be jumping the gun a bit to exclude the possibility that some form of computer in the future may be able to do the same things as a human brain.

In general I'd say you need to look at the AI thread, so that this one doesn't go OT.
"Some form of computer in the future" is just speculation. I'm surprised that someone who describes himself as an IT teacher, does not seem to know how a computer works?

The AI thread is about mythology, as there is no AI.
 
kamalktk said:
Ghostisfort said:
kamalktk said:
Ghostisfort said:
There is nothing in a supercomputer that anyone can compare to a human brain. A computer is the perfect idiot and asking it to solve problems is a waste of time - it crunches numbers.
Expert systems.

Watson is currently being put to use diagnosing illness.

These are the simplest kind of programs: IF... THEN... GOTO.
These have been around for as long as we have had computers. An expert provides the information, which the programmer arranges to answer pre-programmed questions.
If they were IF THEN GO TO, they would answer with 100% certainty as there would only be one path through to the answer, and only one possible answer. Expert systems produce probabilities and answer based on what they think is most likely (or don't if no possibility is likely). Even if the functional layout is different, it's the same process we use, making a best guess.

Such programs work like a flow chart: there are branches for various symptoms. All of the symptoms are entered beforehand and assigned a variable (number).
The computer looks for symptoms (numbers) that match those in the database.
So if it asks, "Do you have a sore thumb?", you answer "yes", and it asks about other symptoms until they meet the requirement for a diagnosis.

What's happening in the computer is that it's asking for yeses and noes to meet a required number of yeses: IF (number of yeses = 3) THEN GOTO LINE 123, where it prints "You need to go to hospital." This is not a decision of the computer, but a pre-programmed instruction to print a sentence already in the computer, assigned a variable number.

The diagnosis has already been done in advance and the computer just prints it out.
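In outline - this is only a sketch with made-up symptoms; the threshold of three yeses and the canned sentence are as described above - the whole thing is just:

# Hypothetical counted-yes checklist: the 'diagnosis' is a pre-written
# sentence printed once enough yes answers have accumulated.
SYMPTOMS = ["a sore thumb", "a headache", "a fever", "a rash"]

def run_checklist(answers):
    yes_count = 0
    for symptom, answer in zip(SYMPTOMS, answers):
        print(f"Do you have {symptom}? ->", answer)
        if answer == "yes":
            yes_count += 1
    if yes_count >= 3:                        # IF (number of yeses = 3)...
        print("You need to go to hospital.")  # ...THEN print the canned line

run_checklist(["yes", "yes", "no", "yes"])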

I did something very similar myself in the 1980s and called it 'Sympathy'. It kept my wife and daughter amused for hours. They were, however, talking to themselves without realising it. This was on an old Commodore 64.
It was based on something like this: http://en.wikipedia.org/wiki/ELIZA

I can't remember exactly, but it would ask, PRINT "How are you today?". The answer would be something like, "I'm feeling unhappy".
The computer would be instructed to retain the words "feeling unhappy" and use them at the end of its answer: PRINT "What is the reason for your 'feeling unhappy'?". It seems to be a conversation, but it's not.
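In modern terms the trick amounts to something like this - a rough sketch from memory, with the phrase-capture rule as described and the rest guesswork:

# ELIZA-style echo: capture a phrase from the user's answer and feed it
# back inside a template question. No understanding involved, just strings.
def respond(answer):
    marker = "feeling "
    if marker in answer:
        feeling = answer.split(marker, 1)[1].strip(".!")
        return f'What is the reason for your "feeling {feeling}"?'
    return "How are you today?"

print(respond("I'm feeling unhappy"))
# -> What is the reason for your "feeling unhappy"?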

You can talk to one of these programs here: http://www.manifestation.com/neurotoys/eliza.php3
 
Ghostisfort said:
"Some form of computer in the future" is just speculation. I'm surprised that someone who describes himself as an IT teacher, does not seem to know how a computer works?

Your point being? I doubt I've described myself as an IT teacher - especially as I'm not one.

We're all speculating here, especially with the latest batch of posts. The point is, if you look at your own PC and then say that 'I am nothing like that' and then say that the idea of computer intelligence is therefore a bunk idea, you're excluding yourself from thinking about various possibilities. And instead you then assume, for some reason, that the 'logical' path points at 'collective consciousness in the aether' (whatever that may be).

The AI thread is about mythology, as there is no AI.

Really? You may have to elucidate such a statement in the relevant thread then!
 
I already did that some time ago.
Weizenbaum tells us that he was shocked by the experience of releasing ELIZA (also known as "Doctor") to the nontechnical staff at the MIT AI Lab. Secretaries and nontechnical administrative staff thought the machine was a "real" therapist, and spent hours revealing their personal problems to the program. When Weizenbaum informed his secretary that he, of course, had access to the logs of all the conversations, she reacted with outrage at this invasion of her privacy. Weizenbaum was shocked by this and similar incidents to find that such a simple program could so easily deceive a naive user into revealing personal information. http://www.alicebot.org/articles/wallace/eliza.html
Weizenbaum thinks that people who mistake computers for intelligence are nuts.
 
Jerry_B said:
Ghostisfort said:
"Some form of computer in the future" is just speculation. I'm surprised that someone who describes himself as an IT teacher, does not seem to know how a computer works?

Your point being? I doubt I've described myself as an IT teacher - especially as I'm not one.

We're all speculating here, especially with the latest batch of posts. The point is, if you look at your own PC and then say that 'I am nothing like that' and then say that the idea of computer intelligence is therefore a bunk idea, you're excluding yourself from thinking about various possibilities. And instead you then assume, for some reason, that the 'logical' path points at 'collective consciousness in the aether' (whatever that may be).

The AI thread is about mythology, as there is no AI.

Really? You may have to elucidate such a statement in the relevant thread then!

Why do you keep reminding me about going off-topic when it was you and your sidekick who started this?
 
Your post on the previous page - There is absolutely no evidence that the brain uses numbers. This is an assumption based on computers. Because some computer programs with clever programming appear to answer questions, that is how the brain works...simple... wrong. There is no binary code in thinking and AI is a myth.
To have AI, we would need to know how thinking works and no one knows.
Computers don't think and the brain does not use binary to throw a spear
- seems to have kicked off the side-tracking... ;)
 
Based on repeated basic mistakes, I've come to the conclusion that I can click the ignore button without consequence.
 