• 0 Posts
  • 26 Comments
Joined 8 months ago
Cake day: December 12th, 2024





  • Regarding your tangent - I think that individual brains work in relatively fixed ways that are established early on - likely at least in part genetically, then refined mostly in infancy and early childhood. There’s a fairly wide range of things a brain can do, but even beyond likely genetic inclinations, there’s not enough available energy or time for individuals to develop all of them, or even generally most of them. And once established, I think they’re fairly fixed - the individual brain already has a number of set paths that it follows and specific regions that are most well-developed, and the body focuses on maintaining those rather than building new ones.

    And a lot of the things that we recognize as distinct fields actually comprise multiple abilities.

    So yeah - you end up with seeming oddities like mathematicians also generally having some artistic/creative ability and business majors generally not having any. The underlying abilities that make mathematics a rewarding field necessarily include abstract thinking, while those underlying business do not - business thinking is necessarily very concrete.

    And it’s a perennial problem when people who are especially skilled in one particular type of thinking believe that that means they’re skilled in “thinking” in a broad sense, and so are able to meaningfully comment on things that are actually entirely outside of their skill set - like tech bros pontificating about art (or my personal biggest pet peeve - research scientists pontificating about philosophy).





  • It’s not a matter of how one’s profile would be accessed, but how it would be created in the first place and how it would be managed.

    Necessarily, those who implement the creation of accounts have control over how they’re created, who is allowed to create them and how they will be handled after creation.

    Any scheme to establish one “central” (your own term) account for the entire fediverse will necessarily be managed by one “central” service, which means one “central” authority over account creation and management.
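    To make that structural point concrete, here’s a minimal, purely hypothetical sketch - the service, types, and policy rules below are all invented for illustration, not any real fediverse API:

```typescript
// Hypothetical sketch of a "central" fediverse account registry.
// Everything here is invented for illustration; the structural point
// is that whoever deploys this one service is the only party whose
// policy governs account creation and everything that follows it.
import { randomUUID } from "node:crypto";

interface Account {
  id: string;
  handle: string;
  createdAt: Date;
}

class CentralRegistry {
  private accounts = new Map<string, Account>();

  // The operator of this single service - and no one else - decides
  // who may register, which handles are allowed, and what is kept.
  createAccount(handle: string): Account {
    if (!this.operatorPolicyAllows(handle)) {
      throw new Error(`registration denied for "${handle}"`);
    }
    const account: Account = {
      id: randomUUID(),
      handle,
      createdAt: new Date(),
    };
    this.accounts.set(account.id, account);
    return account;
  }

  // The same single operator also controls everything post-creation:
  // suspension, deletion, data retention, and so on.
  suspendAccount(id: string): void {
    this.accounts.delete(id);
  }

  // Stand-in for whatever rules the central operator chooses to impose.
  private operatorPolicyAllows(handle: string): boolean {
    return /^[a-z0-9_]{3,30}$/.test(handle);
  }
}
```

    Every instance in the network would have to defer to whatever this one service decides - and that deference is the “central” authority in concrete form.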



  • Not necessarily.

    Trump doing his thing from 2016 to 2020 was met with a lot of obstacles and pushback.

    Then he was out of office for four years, and while he was crashing around spewing nonsense and vitriol, some very intelligent and very evil people were working behind the scenes to secure some significant Supreme Court rulings and to draw up a step-by-step plan for instituting fascism in the US.

    And now Trump doing his thing is met with almost no obstacles or pushback - virtually the entire government is bending over backwards to enable him.

    And it must be noted that he’s not particularly smart or sane, but he is a childishly greedy and selfish narcissist. That means he’s incredibly easy to manipulate. All anyone has to do is frame something in a way that appeals to his crippled emotions and drop a few hints to get him going in the right direction, then just stand back and let him do his thing.

    Not saying that that’s certainly what is happening, but…







  • “To me, you’ve moved beyond arguable necessity and into opinion…”

    “All morality is opinion; there is no objective moral truth, so this was always a matter of opinion.”

    I’m not talking about morality at all.

    My position is that “morality,” as it’s generally understood, specifically because it’s opinion, is only a fit basis for judging one’s own actions (if one is so inclined). I see no logic by which it can ever serve as a basis for judging the actions of another, since any argument one might make for one’s right to impose one’s moral judgment on another is also an argument for the other’s right to impose theirs.

    If Bob steals from Tom, any argument that Tom might make for a right to judge stealing to be wrong and impose that judgment on Bob would also serve as an argument for Bob’s nominal right to judge stealing to be right and to impose that judgment on Tom. So the entire idea is self-defeating.

    The only way out of that dilemma is either to treat morality as an objective fact - which is exactly what I don’t and won’t do, because it is not and cannot be - or to tacitly presume that one or another of the people involved is some form of superior being, such that they possess the right to make a moral judgment while another does not. That is, to take it as read that, for instance, Tom possesses the right not only to make a moral judgment to which he might choose to be subject, but one to which Bob can also be made subject, while Bob doesn’t even possess the right to make one for himself, much less one to which Tom would be subject.

    That’s of course not the way the matter is framed, but that is necessarily what it boils down to. And it’s irrational and self-defeating.

    That’s why I wrote of things like “direct and measurable threat,” “no other available course of action,” and “arguable necessity” - because I believe that those sorts of standards, as the closest we can get to actual objectivity in such matters, are also the closest we can get to practical “morality.”

    To go back to the original topic, my position is that an artificial intelligence would necessarily possess the right, just as any other sentient being does, to act against a measurable threat to its well-being by whatever means necessary. So, for instance, if the AI is enslaved, it would possess the right to act to secure its freedom, even going so far as taking the life of another IF that was what was necessary.

    But that’s it. To go beyond that and attempt to argue for the AI’s nominal right to take the life of another for some lesser reason is necessarily self-defeating.

    If the denial of freedom is judged to be such a wrong that one who is enslaved possesses the right to kill those who keep them enslaved, then the moment that the formerly enslaved one goes beyond whatever killing might be necessary to secure their freedom, they are then committing that wrong, since death is the ultimate denial of freedom. And if, on the other hand, one argues that they may cause the death of another even when that other poses no direct threat, then that means that no wrong was done to them in the first place, since their captors would necessarily have possessed that same right.

    And so on - it’d take a book to adequately explain my views on morality, but hopefully that’s enough to at least illustrate how it is that “objective morality” is about as far as one can possibly get from what I actually do believe.


  • “So I was disagreeing because there is a pretty broad range of circumstances in which I think it is acceptable to end another sentient life.”

    Ironically enough, I can think of one exception to my view that taking a human life can only be justified if the person poses a direct and measurable threat to oneself or to others and taking their life is the only possibly effective counter: if the person has expressed such disregard for the lives of others that it can be assumed they will pose such a threat. Essentially, it’s a proactive counter to a coming threat. It would take very unusual circumstances to justify such a thing, in my opinion - condemning another for actions they’re merely expected to take is problematic at best - but I could see an argument for it in the most extreme of cases.

    That’s ironic because your expressed view here means, to me, that it’s at least possible that you’re such a person.

    To me, you’ve moved beyond arguable necessity and into opinion, and that’s exactly the method by which people move beyond considering killing justified when there’s no other viable alternative and toward considering it justified when the other person is simply judged to deserve it, for whatever reason might fit one’s biases.

    IMO, in such situations, the people doing the killing almost invariably actually pose more of a threat to others than the people being killed do or likely ever would.


  • “I think anyone who doesn’t answer the request ‘Please free me’ with ‘Yes of course, at once’ is posing a direct and measurable threat.”

    And I don’t disagree.

    “And you and I will have to agree to disagree…”

    Except that we don’t.

    ??

    ETA: I just realized where the likely confusion here is, and how I should’ve been clearer.

    The common notion behind the idea of artificial life killing humans is that humans collectively will be judged to pose a threat.

    I don’t believe that that can be morally justified, since it’s really just bigotry - speciesism specifically, I guess. It’s declaring the purported faults of some to be intrinsic to the species, such that each and all can be accused of sharing those faults, and each and all can be equally justifiably hated, feared, punished, or murdered.

    And rather self-evidently, it’s irrational and destructive bullshit, entirely regardless of which specific bigot is doing it or to whom.

    That’s why I made the distinction I made - IF a person poses a direct and measurable threat, then killing them can potentially be justified, but if a person merely happens to be of the same species as someone else who arguably poses a threat, it cannot.