@Graingy And for the cherry on top: the parameters for negative feedback are narrowed over time so it keeps evolving. A self-perpetuating AI with no choice but to create other AIs better than itself.
@Graingy I believe an AI can feel pain in the sense of outputting negative feedback. You have to remember that they are simply code. An AI that feels pain when it does something wrong will, more likely than not, attempt to achieve perfection. The likely end result would be constant pain that is unfixable, since it cannot improve any further. That would be considered the end of its lifespan. Assuming we made many of these AIs and used something like a generational model, the AI could in theory evolve toward perfection over many generations.
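The generational model described above is essentially an evolutionary loop. Here's a minimal sketch of the idea, purely illustrative: the `pain` function, mutation scale, and population size are all assumptions for the example, not anything specified in the thread.

```python
import random

def pain(candidate, target=0.0):
    """'Pain' as negative feedback: distance from a hypothetical perfect output."""
    return abs(candidate - target)

def evolve(generations=50, pop_size=20, seed=42):
    """Generational model: keep the least-pained candidates, mutate them, repeat."""
    rng = random.Random(seed)
    population = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by pain (lower means closer to "perfection").
        population.sort(key=pain)
        survivors = population[: pop_size // 2]
        # Each survivor spawns a slightly mutated successor.
        children = [c + rng.gauss(0, 0.5) for c in survivors]
        population = survivors + children
    return min(population, key=pain)
```

Because the unmutated survivors are carried forward each generation, the best candidate's pain never increases, so across many generations the population drifts toward the target even though it never quite reaches zero.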
@Graingy Possibly. But to aim for a more intelligent AI, it must be able to "feel" misery, as that is an effective way to make it either more humanitarian or much less humanitarian. Either way, the principle of allowing pain to be felt is one that will benefit the AI network.
@Graingy You do sound quite appropriate. Now to the matter at hand: to commit the act of murder you must kill a being. Whether AI is a being or not could be discussed, but it is currently irrelevant.
The topic was to cause the device to "feel" pain, which is not killing it; instead it is prolonging what humans would describe as misery. So the question is not whether murder is okay, but whether inflicting misery is.
@Graingy Please review the previous comments that were furthering our discussion. Perhaps we've evolved beyond the basic premise of the initial discussion topic.
@Graingy To bring us back to the initial topic: we are good people. And to behave like an early hominid is really to behave like a modern individual, because we are modern-day individuals.
@Graingy I did. However, this miscommunication or misconception also contributed valuable information to our thesis about humanity, as it only further validated our hypothesis.
@Majakalona bro he's probably just running it to have better plane performance.
@Graingy quite so
@Graingy correct
@Graingy To judge is for those who are logical.
@Graingy correct. But is the perfect one worth more than the imperfect many?
@Graingy Maybe to emotional beings such as you and me. However, to a purely logical being, that is yet to be determined.
@Graingy "does your existence justify the suffering"
@Graingy Well, if I created new AIs once perfect, then it could be asked one singular question: "Do the means justify the ends?"
@Graingy Correct. Make the objective to create an AI that is better than itself, allowing for the eventual creation of perfection.
@Graingy We could let avoiding the negative itself cause negative feedback. How's that for stimulating thought?
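That idea, penalizing avoidance so that "doing nothing" is never the least painful option, can be sketched as a toy feedback rule. The specific penalty values here are assumptions chosen only to illustrate the ordering, not anything from the thread.

```python
def feedback(acted: bool, succeeded: bool) -> float:
    """Toy feedback rule: failure hurts most, but avoidance also hurts,
    so the agent can never escape negative feedback by refusing to act."""
    if not acted:
        return -0.5  # avoidance penalty (assumed value)
    return 1.0 if succeeded else -1.0
```

The key property is the ordering failure < avoidance < success: acting and failing is worse than avoiding, but avoiding is still worse than acting and succeeding, so an agent that expects to succeed is always pushed to act.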
Sorry man, I hope you get better quickly
Nice Job @MrCOPTY
@Graingy That would be correct. Act like the early Neanderthals, which we would consider inferior.
@DOYOUMIND it does not
@Graingy so as per the previous statement. Act like a caveman.
@TheTomatoLover me too
Interesting
@Graingy modern humans ain't that smart either
@Graingy even the smart cavemen are dumb in comparison to today's people.
@weeeeeeeeeeeeeeee yeah, it carries out its intended purpose
Single Engine
@Graingy so be the caveman for tomorrow
@Graingy correct.
Link to Video
@Graingy For those who improve will only be seen as degenerates by future generations.
Nice
@Zaineman thanks
@EarthAndMoon yeah
Hyperion Underwater Aircraft has been released
Link to Plane
@Graingy no. Purity must be overcome.
@Englishgarden you read nothing
@Graingy no we are definitely good people
@Graingy we are good people
It's coming back
@Graingy yes we are good people. Now let's carry out our good people plans.
@Graingy good idea
Looks good, I like it.
@titchbickler123 yeah, the CoM and CoL seem to be positioned well.
Yoooooo Awesome
@MrKtheguy Definitely a mark
that's gonna leave a mark
Nice, I like it
pigpen tears