|
Post by DeNitro on Oct 1, 2016 8:33:06 GMT
True artificial intelligence is, in today's world, a myth. The primary debates continue over our capability to create a true example, and over its effect on humanity once a functional sentient AI is created. The question of whether a sentient AI will be humanity's salvation or its doom seems an almost dismissive issue, lulling us all into thinking there are no immediate concerns. The reason: sentience is an extremely high-level biological ability. Human brains evolved with emotion as a core part of our memories, our decision making, and our survival instincts. The limbic system is an important part of our brain, linking to the autonomic nervous system in our bodies. It is all biological, chemical, and illogical, and as a result a near-impossible-to-predict system that digital electronic computer hardware and software may never be able to mimic. Current-era call center software can evaluate the words and vocal tone of a distraught caller and adjust its answers to sound empathetic, but it will never "feel" any emotion or compassion no matter how long the caller sobs.
Yet obtaining sentience isn't the safety point so many pretend it is. AIs may pose a real threat long before that level is reached. A very simplistic example: a carpenter ant is not considered a sentient creature, yet once AIs attain the much-easier-to-achieve level of awareness and social interaction a carpenter ant exhibits, could that be the danger tipping point?
What are your thoughts? I would like to see a rosy future filled with magical and wondrous things such as beneficial AIs. I just hope reality doesn't ruin it for us all.
|
|
|
Post by Admin on Oct 4, 2016 13:02:15 GMT
One of the big reasons I pushed in-story for an organic-computing base for Jane and the other Minerva AIs was to approach the problem in a similarly organic way. Because I agree that simply increasing processing speed and writing a larger database of IF:THEN statements does not true sentience make. Jane had to develop organically as her biological components grew and formed neural patterns in response to data inputs.
|
|
bevis
Faction 1
Posts: 111
|
Post by bevis on Oct 8, 2016 11:33:31 GMT
Sensors may make sentience. At least, as I read in a psychology course, sense signals form and are immediately responded to in our nervous system. That may mean that more and more sensors, wired to a neural network (of an enormous quantity of sub-processors), could interact with the world during a learning process and build associative chains. In this process, a sensor must analyze a signal, react to it, and form a signal for the neural system. Sentience thus forms in a long learning process. But I have no idea whether empathy can form this way or not. A machine's senses will not equal a human's, and correspondingly its point of view will be different. I think it will not feel hurt when damaged, but rather a feeling of discomfort, and accordingly fear (or a close analog). In that case, the machine will decide for itself whether to bring doom or salvation; it is an individual choice. In any case, sentience can't be programmed; that would only be imitation. True sentience must form through interaction with the world, based on a complicated sensor array in a network of neural processors (or another multi-task system; I am not an expert in this). Sorry for my bad English. Food for thought (maybe you have already read it): Blindsight (Watts novel)
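A toy illustration of that associative-chain idea (my own sketch, not from the post; the update rule and thresholds are invented): a simple Hebbian-style rule strengthens the link between a sensor pattern and a response each time they co-occur, until the pattern alone triggers the reaction.

```python
# Toy Hebbian association: repeated co-occurrence of a sensor pattern and a
# response strengthens their link until the pattern alone triggers the response.
# Purely illustrative; real sensorimotor learning is far more complex.

def train(pairs, rate=0.2):
    """Strengthen sensor->response weights for each co-occurring pair."""
    weights = {}
    for sensor, response in pairs:
        key = (sensor, response)
        weights[key] = weights.get(key, 0.0) + rate
    return weights

def react(weights, sensor, threshold=0.5):
    """Return learned responses whose association exceeds the threshold."""
    return [resp for (s, resp), w in weights.items()
            if s == sensor and w >= threshold]

# After three "heat together with withdraw" experiences, the link fires alone.
w = train([("heat", "withdraw")] * 3)
print(react(w, "heat"))   # ['withdraw']
print(react(w, "light"))  # []
```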
|
|
|
Post by Admin on Oct 8, 2016 13:08:21 GMT
Yeah, any true sentience would have to be allowed to form organically as you connect a bunch of sensors and processors to perceive, react, and form patterns.
In the comic, that's what I have happening with Jane and similar AIs, just with a bit of an organic-based "brain".
|
|
|
Post by DeNitro on Oct 10, 2016 8:50:05 GMT
There is a repeatedly proven truth in computing: you can simulate hardware with software, but it is very costly in code, memory, and processor power. Simple examples are those old software-modem and software-video solutions. Here, a yet-to-be-built electronic supercomputer capable of running a virtual machine at optimistically four orders of magnitude beyond the projected human brain power of 10-30 petaFLOPS would be needed to even attempt more than a moment of simulation.
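The sizing argument above can be run as back-of-envelope arithmetic (the 10-30 petaFLOPS brain estimate and the four-orders-of-magnitude virtualization penalty are the post's assumptions, not measured values):

```python
# Back-of-envelope: hardware needed to *simulate* a brain in software, using
# the post's figures: brain ~ 10-30 petaFLOPS equivalent, plus a ~4-orders-of-
# magnitude penalty for running the simulation as a virtual machine (assumed).
PETA = 1e15

brain_flops_low, brain_flops_high = 10 * PETA, 30 * PETA
simulation_overhead = 1e4  # "four orders of magnitude" virtualization cost

required_low = brain_flops_low * simulation_overhead
required_high = brain_flops_high * simulation_overhead

print(f"{required_low / 1e18:.0f} - {required_high / 1e18:.0f} exaFLOPS")
# i.e. hundreds of exaFLOPS -- far beyond any machine of the era.
```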
Bio-electronics are developing very, very slowly. However, I recall how sudden consumer demand caused a huge surge in expensive "gaming" hardware development some years ago. I can see expensive quality-of-life and life-extension developments changing the bio-electronic future just as rapidly.
When it happens, bio-computers are the doorway to true sentience. Being grown, and then having to complete their programming through learning, will also expose them to humans. As a result, absorbed human emotional baggage will influence their final development.
For now, remember: Deep Blue won its chess match with a "brain" IBM's inventors compared to a simple lizard's. Outside of any cruelty-to-animals discussions, no one is arguing too loudly that simple lizards approach the sentience we are discussing here.
|
|
|
Post by DeNitro on Oct 10, 2016 9:37:22 GMT
Yeah, any true sentience would have to be allowed to form organically as you connect a bunch of sensors and processors to perceive, react, and form patterns. In the comic, that's what I have happening with Jane and similar AIs, just with a bit of an organic-based "brain". I think we all get where you're going with the AIs. The problem for others is that when they see mention of connecting a bunch of sensors and processors, they forget the limitations it presents. When one connects a femtosecond-speed processor to objects more than seven inches away (the distance electricity moves in one nanosecond), you present a wait-state delay to that processor comparable to you having a radio conversation with someone 31 light years away. Obviously, a networked mind using CPUs spanning the globe was never a real danger.
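The wait-state arithmetic can be checked directly. A minimal sketch, assuming signal propagation at roughly two-thirds the speed of light in copper (my assumption; the post's seven-inch-per-nanosecond figure comes out close):

```python
# How many cycles a processor with a 1-femtosecond cycle time wastes waiting
# for a signal from a component ~7 inches away, assuming signal propagation
# at ~2/3 the speed of light in copper (approximation).
C = 2.998e8                # speed of light in vacuum, m/s
signal_speed = 2 / 3 * C   # ~2e8 m/s in copper (assumed)
distance_m = 7 * 0.0254    # 7 inches in meters

delay_s = distance_m / signal_speed  # one-way propagation delay
cycle_s = 1e-15                      # femtosecond cycle time

print(f"delay: {delay_s * 1e9:.2f} ns")            # ~0.9 ns one way
print(f"idle cycles: {delay_s / cycle_s:,.0f}")    # ~900,000 wasted cycles
```

So a single seven-inch hop costs on the order of a million idle cycles at that clock rate; a mind spread over globe-spanning links would spend almost all of its time waiting.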
|
|
|
Post by bevis on Oct 11, 2016 19:09:30 GMT
What I meant here is that each sensor is combined with a chip which generates a signal and mixes it with the others, all in analog. If the signal matches an already-studied pattern, the reaction takes place immediately upon receipt (roughly how you pull your hand from a fire). The rational AI core in this case only needs to correlate its control signals with the signals received from the sensors during the learning process. For the senses specifically, it does not matter what the core is (though it should be a true AI); what matters is that it remains permanently in the world, inextricably analyzing its surroundings, feeling the continuity of its own operations and of changes in the environment. I was once in a situation where I lost all feeling but could still think. I believe an AI's reaction would be similar to a human's in such a situation, which suggests that it feels. If the AI also tries to compare its perceptions with the sensations of other subjects, we can talk about reflection and empathy.
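That reflex-versus-core split can be sketched as a simple dispatch rule (entirely my illustration; the pattern table and names are invented): signals matching an already-learned pattern trigger an immediate local reaction, while everything else is forwarded to the slower rational core.

```python
# Reflex-arc sketch: a sensor node reacts instantly to learned patterns
# (like pulling a hand from fire) and forwards unfamiliar signals to the
# rational core for slower analysis. Purely illustrative.
REFLEXES = {"fire": "withdraw", "sharp_pressure": "recoil"}  # learned (assumed)

def sense(signal):
    if signal in REFLEXES:
        return ("reflex", REFLEXES[signal])   # immediate local reaction
    return ("core", f"analyze:{signal}")      # escalate to the AI core

print(sense("fire"))      # ('reflex', 'withdraw')
print(sense("birdsong"))  # ('core', 'analyze:birdsong')
```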
About bio-electronics: I do not think its presence is fundamental to the creation of AI. Its purpose is usually either the imitation of the psyche of living organisms, which I do not find particularly interesting, or the optimization of calculations through simultaneity and competing chemical processes. All of this can in principle be done with electronics. Biochemical processes essentially perform the same binding, and also regulate the various parts of the system much as a chip can, with the difference that all options occur simultaneously and the most successful one becomes decisive. Much more important for creating AI, it seems to me, is grasping the principles of a self-developing system consisting of a large number of elementary parts. The process of binding, structuring, and change is what matters. Of course, bio-electronics offers more successful methods and mechanisms for solving the problems that arise, allowing an AI to be made more compact than with pure electronics. And we must understand that an AI absolutely does not need to think and feel exactly as a human does; in fact, that is what a true AI is. Otherwise, we could just pursue the modernization of our own abilities, but that would be improved human intelligence. Still, it is possible that an AI will try to understand our intelligence and correlate it with its own.
In general, I agree with DeNitro about bio-electronics, but I think other ways to sentience exist.
Sorry for the confused thoughts and Google-translated text; I'm too tired to pin down the exact terms, and the topic is quite complicated. Perhaps closer to the weekend I will be able to better articulate what I think.
|
|
|
Post by Amberlight on Oct 11, 2016 20:18:26 GMT
I already dropped this video to Dan, and I will put it here too. It's a good look at the possibilities of AIs and what we might expect from them. What I'd like to point out is that AIs will likely NOT think the same way humans do. As DeNitro pointed out, our brains evolved emotion much earlier than reason, so our reasoning always goes through emotions first. An AI designed from scratch won't have to have such a mind pattern; in fact, that would be a rather inefficient way to construct an AI. So unless its creators were willing to sacrifice its reasoning power to make it more human-like, we probably won't be seeing AIs with true feelings and emotions. However, in the DA setting as I understand it, Minervan science is as yet unable to build intelligence from scratch, and so Angeline's team more or less copied a large part of the human brain architecture, which necessitated copying the emotional parts, since they are closely wired to reasoning. This is a viable way of doing things, called biomimicry. But as the science advances, we'd expect to see a shift from human-like AIs to more logical intelligences without emotions, or at least not influenced by emotions, with emotions processed after the actual reasoning is done.
|
|
|
Post by DeNitro on Oct 12, 2016 8:33:53 GMT
What I meant here is that each sensor is combined with a chip which generates a signal and mixes it with the others, all in analog. ........... {Clipped for Brevity} First, there is zero need for any apology for your English; the English you are using is quite understandable. Instead, please forgive my lazy, sloppy, and loose use of language and syntax. After years of internet postings, I am aware of how hard what I type can be to translate. I become too focused on the act of typing a thought, forgetting that readers are not all native English speakers. Making a serious prediction of what technology is needed, or will come into use, is something I force myself not to do. With certainty, someone can easily state that today's chip-based semiconductor electronic computers are incapable of accomplishing this monumental task, simply by considering the required logic core size. The valid arguments are physical constraints, electrical and die-size limits, power requirements, thermal issues, and material consistency. However, eight months or eight decades from now, those same chip-based semiconductor electronic computers could become something unimaginable, or be replaced and remembered like their vacuum tube and transistor predecessors. A safe bet no matter the change: that large warehouse full of linked servers will still not host a sentient mind. But if you're confusing intelligence with sentience... my reply to Amberlight below might continue there.
|
|
|
Post by DeNitro on Oct 12, 2016 9:35:18 GMT
I already dropped this video to Dan and I will put it here too. It's a good look at possibilities of AIs and what we might expect from them. ........... {Clipped for Brevity} I have already argued that highly intelligent AIs can be dangerous long before sentience is ever achieved. Expectations from human operators will always cause the insertion of "human interactive elements" into our logic-based intelligent machines. Call center software greetings seem an easy example, but people dismiss how advanced this can become. It is foreseeable that a highly developed digital logic-based AI could be so well coded with scripted verbal and social reactions and responses as to appear convincingly more than it is. Humans interacting with it will socially assume its "state of being their friend" by virtue of its custom personalized interaction and memory abilities.
Remember, these are scripted code elements to interact with the human operators, running on a pure logic computer system. It can be coded to act out feelings and give appropriate responses; the code can even make helpful suggestions, and can be perfected to where only the programmers and manufacturers would not be fooled. But this is not sentience. In fact, this type of AI is preferable, and would exist in a society that bans sentient AIs, due to the ability to securely restrict and limit a logic AI without impeding its function. Sentience is a biological standard expected and projected by humans onto some ultimate AI. Consider what sentience is by our own definition: the ability to understand, experience, and display consciousness, personality, empathy, desire, will, ethics, humor, ambition, insight, and other human emotions. We assume anything with this ability is dangerous, because we know we are.
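The "scripted empathy" point can be made concrete with a toy example (my own sketch; the cue words and templates are invented): the program selects a sympathetic-sounding template from detected cue words, with no internal state resembling feeling at all.

```python
# Toy scripted-empathy responder: pure IF/THEN template selection keyed on
# cue words. It *sounds* caring but holds no emotional state whatsoever.
CUES = {
    "sad": "I'm so sorry you're going through this.",
    "angry": "I completely understand your frustration.",
    "lost": "That sounds really difficult; let's sort it out together.",
}
DEFAULT = "Thank you for telling me. How can I help?"

def respond(utterance):
    words = utterance.lower().split()
    for cue, template in CUES.items():
        if cue in words:
            return template  # scripted output, nothing felt
    return DEFAULT

print(respond("I am so sad about my bill"))
# I'm so sorry you're going through this.
```

Scale the cue list and templates up far enough, add per-caller memory, and you get exactly the convincing-but-empty interaction described above.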
|
|