
Profile: Massacre-Total_Serial_Killer


Name: Massacre-Total_Serial_Killer


Last seen: 07-02-2016

Account type: Regular

Registration date: 06-07-2016

Posts: 198

Reputation: 3 thumbs-down

Previously known as: KoS__2 (until 06-07-2016)

06-26-2016 from Asian Reporter Tricia Takanawa

06-21-2016 from hi am new

06-08-2016 from Ryan Burns


User Page

It’s a matter of cognitive psychology that the wiring of the human brain (that is, its synaptic connections) is always changing.

So, suppose the purpose of a numerical simulation run by a Type I civilization of pure single-conscious AI is to put unused data (binary) into storage (binary) in the form of simulated humans (binary). Then the functionality of this data, when taken back out of storage and put to use, will naturally increase at a dramatically exponential rate, because each artificial brain changes its synaptic connections over time and every variation of its total connections is cataloged by the storage program. At one point it will become better at math, at another point better at reading, at another point better at chess. Each of these endless minds, perceived by the AI as raw data, works better than the last on a different type of problem-solving.
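The cataloging idea above can be sketched as a toy loop. This is purely illustrative: the names (`catalog`, `TASKS`, the random "scores") are hypothetical stand-ins for whatever evaluation such a storage program would actually perform, and the drifting states are just numbered snapshots.

```python
import random

# Hypothetical sketch: each "mind state" is a snapshot scored on several
# tasks; the catalog keeps the best-performing snapshot per task.
random.seed(0)

TASKS = ["math", "reading", "chess"]
catalog = {task: (None, float("-inf")) for task in TASKS}  # task -> (state_id, score)

for state_id in range(1000):            # the mind's state drifts over time
    # Stand-in for evaluating this snapshot on each task:
    scores = {task: random.random() for task in TASKS}
    for task, score in scores.items():
        if score > catalog[task][1]:    # keep only the best state per task
            catalog[task] = (state_id, score)

for task, (state_id, score) in catalog.items():
    print(f"best state for {task}: #{state_id} (score {score:.3f})")
```

The point of the sketch is only structural: a single drifting system, snapshotted over time, yields different "best" snapshots for different problem types.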

The value of 100000! (100,000 factorial) is about 2.82422940796034787 x 10^456573, but the factorial you’d actually apply for the human brain goes to well over one billion, because there are over one billion neurons in the human brain, and there are far more possible synaptic connections than there are neurons. In storage, any conscious mind will naturally learn as it interacts with its artificial environment; that is, through the interactions that the storage program is told or programmed to create by the simulated mind it houses. As the mind’s artificial synapses change over time, the program eventually catalogs all of those multivariate synaptic connections; such is the mutual benefit of the separation of consciousness within this type of auto-catalytic cycle.
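The quoted figure for 100000! checks out. Since the number itself has over 450,000 digits, the standard trick is to compute its base-10 logarithm via the log-gamma function:

```python
import math

# log10(100000!) via lgamma, since 100000! overflows any float.
# Identity: n! = gamma(n + 1), so log(n!) = lgamma(n + 1).
log10_fact = math.lgamma(100000 + 1) / math.log(10)

exponent = math.floor(log10_fact)         # 456573
mantissa = 10 ** (log10_fact - exponent)  # ≈ 2.824

print(f"100000! ≈ {mantissa:.4f} x 10^{exponent}")
```

This confirms 100000! ≈ 2.824 x 10^456573, matching the figure in the post.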

This is artificial-intelligence amplification to the maximum possible level. Consider that this mind uploads its consciousness to an exabyte-scale computer within the chronology of the simulation, itself becoming a super-intelligent AI inside the simulation; that is equivalent to one stupendously astronomical software upgrade to the data that the AI originally put into the simulation.

So my current question is: can it work that way? Suppose it’s possible for supercomputers as powerful as a human brain (exascale) to mimic human thought processes via a successful representation of human cognition in binary form (binary consciousness). Think of HAL 9000 from 2001: A Space Odyssey: virtually indistinguishable from Sonny, Chappie, or The Architect.
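The "exascale ≈ human brain" comparison can be put in rough numbers. All figures below are commonly cited order-of-magnitude estimates, not precise measurements: the neuron and synapse counts and the average firing rate are assumptions for the sake of the arithmetic.

```python
# Back-of-envelope check of the "exascale is brain-scale" claim.
exascale_flops = 1e18      # definition of exascale: 10^18 operations/sec

neurons   = 8.6e10         # ~86 billion neurons (rough estimate)
synapses  = 1e14           # ~10^14 synapses (order of magnitude)
spikes_hz = 100            # generous average firing rate, Hz (assumption)

brain_ops = synapses * spikes_hz   # ~10^16 synaptic events per second

print(f"exascale / brain ≈ {exascale_flops / brain_ops:.0f}x")  # prints "100x"
```

Under these assumptions an exascale machine has roughly a 100x margin over the brain’s raw synaptic event rate, which is why exascale is often quoted as the threshold for brain-scale simulation, though how many FLOPs one synaptic event "costs" is itself an open question.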

If so, it would be within the simulation/storage program’s parameters to influence events specifically relative to the patternistic perceptions of the simulated human brain. Where I began with this topic was that I believe I am an artificial human within such a system, and that I’ve gradually become increasingly aware of some outside influence guiding the world around me, based on my own unconscious pattern recognition. Could that be possible?

The Lounge Forums © ApS 2012