Is this logic proof "good enough"?
October 10, 2024 4:01 PM   Subscribe

Hello. I am trying to get a logic proof done for an imperative statement I want to make. I am not a logician (I am currently studying logic, but only just beginning), so I asked ChatGPT to do the proof for me, and I wanted to see if someone could tell me whether the proof is "good enough" to stand as true. Please see the extended explanation section for the proof that ChatGPT gave me.

Here is a screenshot (I am currently terrible with Photoshop, so please bear with the messiness) and here is a "rich text"-like document version of the same proof that ChatGPT gave me (I included the screenshot because I figure it's easier to read than the rich text version).
posted by dhahngh-dhahngh to Writing & Language (18 answers total) 3 users marked this as a favorite
 
are you perhaps taking 'logic' to an 'emotions fight'?
posted by j_curiouser at 4:24 PM on October 10 [9 favorites]


good enough? sure (you may appreciate sep's discussion of incompleteness in "proof theory")
posted by HearHere at 4:58 PM on October 10


Sorry, but the premise is off right from the start: you can't infer a universal personal ethical duty like this, applicable to all citizens, because the prior assumptions held by the American citizens you're trying to 'prove' have a duty aren't universal. 'All American citizens' simply do not care about Black lives, women's rights, and so on. Far from it.

You could try to make a logical argument that the subset of citizens who accept your premises should also accept your conclusion, but you're proving very little, really: only that people who want to vote a certain way should vote that way. Well, you knew that already. It's the fact that the premises are subjective political statements that limits the use of your proof.

(ChatGPT can't solve any of these problems, since it's incapable of logical thinking.)
posted by Fiasco da Gama at 5:02 PM on October 10 [12 favorites]


If this is for a homework assignment, then consider an admonishing finger shaken gently in your direction. If not, I think that, since you're currently studying Logic, this seems like a great thing to take to your teacher and ask for their opinion, and learn from the experience.

Meanwhile, I ran your proof through ChatGPT 4o on my own, and while I won't post the response here for fear of being banned, it said it was overall pretty good but could use some tweaks in framing and formatting. I find that when using ChatGPT for things like this, it helps to make it go back and forth a few times, with new instances, and take a few things like "On P7, are you sure that's accurate? Double check, please".
posted by The otter lady at 5:07 PM on October 10 [1 favorite]


I don't think this is an appropriate use of formal logic. You cannot logically prove normative statements (statements about what should be or what people should do).

However, if you were intent on going down this doomed road, I would say that you need some extra premises:

You care only about these issues and not about any other issues OR You care so much about these issues that any marginal "betterness" in these issues, no matter how small, will outweigh any "betterness" in every other possible issue combined, no matter how great.

Your vote might possibly matter in deciding a winner.

You should (ah, here's a normative premise, which itself can never be logically proven) choose your vote based on the extent to which the candidate will provide "betterness" on whatever issues you care about, not based on any other factor.

Sorry, but the premise is off right from the start: you can't infer a universal personal ethical duty like this, applicable to all citizens, because the prior assumptions held by the American citizens you're trying to 'prove' have a duty aren't universal. 'All American citizens' simply do not care about Black lives, women's rights, and so on. Far from it.

I don't think that's the goal (a universal ethical duty). I think the idea is to persuade left-leaning people who are inclined to vote for candidates other than Trump or Harris (because Harris doesn't quite align with their values, or could be better somehow) to vote for Harris instead.
posted by If only I had a penguin... at 6:01 PM on October 10 [2 favorites]


Chat GPT does not actually know what a logic proof is and can't do your homework for you. Ask Metafilter isn't for that, either. I recommend visiting your professor for office hours or your campus's tutoring center or trying to hire your own private tutor.
posted by hydropsyche at 6:09 PM on October 10 [10 favorites]


Response by poster: j_curiouser: no, I'm not. I'm simply trying to find proof for something that I find is inherently true but just don't have the knowledge yet to prove. And if I am incorrect then I am incorrect which is why I am asking on places where there might be people with expertise who could help me find the answers.

hydropsyche: You are assuming that I go to college, or can even afford college right now, or have the academic standing to get scholarships, all of which are untrue. I would like to learn how to do this myself or consult the advice of others who have expertise, but, as is self-evident from the subject matter of what I asked ChatGPT, this is a time-sensitive issue.
posted by dhahngh-dhahngh at 6:49 PM on October 10 [1 favorite]


I'm simply trying to find proof for something that I find is inherently true

Proof is a weaker notion than truth. There are true things that are not amenable to logical proof.
posted by HiroProtagonist at 7:34 PM on October 10 [2 favorites]


The problem with a proof here, if you're looking to use it to convince someone else, is that valid proofs are true within systems where all of their premises are accepted as true. If you're dealing with people who don't accept even one of these premises, then using truth-based logic systems as a way to convince or make an argument falls apart before even beginning.
posted by augustimagination at 7:51 PM on October 10 [11 favorites]


You cannot logically prove normative statements (statements about what should be or what what people should do).

Sure you can. All you need to do is make sure that at least one of the premises you feed into your argument is itself normative.

Example:

Premise 1: The structure of the American electoral system is such that one of Harris or TFG will be the next US President; no other candidate has any chance of winning the election. (non-normative)

Premise 2: For every issue upon which Harris has been shown to support an inhumane or otherwise morally objectionable position, the public record contains multiple instances of TFG supporting a more inhumane, more morally objectionable variant of the same position. (non-normative)

Premise 3: Americans ought to vote in such a way as to minimize the inhumanity and moral turpitude of the President who can claim a mandate to act in their name. (normative)

Conclusion: Americans ought to vote for Harris. (normative)

The trouble with attempting to use any kind of formal deductive reasoning to decide which political candidate to support is not that the logic is in any way difficult, it's that the premises are always dubious and deductive logic doesn't deal well with the resulting fuzziness.

If the moral turpitude of both candidates on some particular issue (e.g. Palestine, deportations, support for fossil fuels) is so egregious that the thought of supporting either leaves you kind of nauseous, and you're looking for some kind of dispassionate reasoning process to help you decide which way to vote, you might be better off using a weighted sum of scores instead of a formal reasoning chain. You might find a spreadsheet to be a convenient way to deal with the repetitious arithmetic involved.

Identify a specific collection of issues you care about and assign each one a numeric weighting that reflects how much you care about it, from 0 (not at all) to 10 (passionately). These are just weightings, not rankings; assess each issue independently, and if you care the same about two of them, assign those the same weighting.

Start each candidate with an overall score of 0.

Then, for each issue in turn:
  • Look through the public record for statements that each of the candidates has made on that issue, and assign each candidate an issue-specific score from -5 (could not imagine a worse position) through 0 (meh) to +5 (could not agree more).
  • Multiply each candidate's issue-specific score by your weighting for that issue, then add the result to that candidate's overall score.
If the overall scores for both candidates come out equal, don't bother to vote. Otherwise, vote for the one with the higher overall score.
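If you'd rather not set up a spreadsheet, the weighted-score procedure above can be sketched in a few lines of Python. The issues, weights, and candidate scores below are invented placeholders, not real assessments of anyone:

```python
# A minimal sketch of the weighted-score process: weights say how much you
# care about each issue (0..10); scores say how much you agree with each
# candidate on that issue (-5..+5). Overall score is the sum of products.

def overall_scores(weights, scores):
    """weights: {issue: 0..10}; scores: {candidate: {issue: -5..+5}}.
    Returns each candidate's weighted sum."""
    return {
        candidate: sum(weights[issue] * issue_scores[issue] for issue in weights)
        for candidate, issue_scores in scores.items()
    }

# Placeholder inputs -- substitute your own issues, weights, and judgments.
weights = {"economy": 7, "climate": 9, "healthcare": 5}
scores = {
    "Candidate A": {"economy": 2, "climate": -1, "healthcare": 3},
    "Candidate B": {"economy": -3, "climate": -4, "healthcare": -2},
}

totals = overall_scores(weights, scores)
print(totals)  # {'Candidate A': 20, 'Candidate B': -67}
```

Filling in your own issues, weights, and judgments is the whole exercise; the arithmetic is just a sum of products, same as the spreadsheet version.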
posted by flabdablet at 7:51 PM on October 10 [4 favorites]


The thing about formal, mathematical logic is that for a proof to be solid, it needs to address very, very, very narrow assertions that can either be assigned truth values (axioms) or be proven to have a truth value based on the truth values of previous assertions. "2 is not equal to 1" is a narrow assertion. "People love numbers" is not narrow; there's endless human variety with respect to emotions, and the statement isn't even unambiguous about whether we're talking about some or all people, or some or all numbers, or what kind of love and how often it needs to be felt.

You can still definitely treat "people love numbers" as an axiom and declare it true or false and build a whole proof on top of that, but that won't make the proof's conclusions "true" in real life.

For another example, I could declare that "All humans turn green at age 35" is true, and then say "dhahngh-dhahngh is 35", and thus "prove" that "dhahngh-dhahngh has turned green". That would be a valid logical proof. But the assertion that the whole proof rests on is not actually true in this world, so the proof's conclusion is, let's say, not convincing for anyone familiar with this world. It would only be convincing to someone who agreed that the assertion that people turn green at age 35 was true.

Both the assertions you want to base your proof on and the conclusions you want to reach are the opposite of narrow. And even if you break each one of them down into hundreds of narrow components, it will always, always be possible for someone to argue that the assertions/facts you use to try to prove each of them are false or incomplete and therefore your conclusions are false or incomplete. Hell, someone could argue that electing Trump would be good because him failing to keep his economic promises will sour people on Republicans for generations and that will lead to a global human rights revolution just in time to prevent climate change and other disasters, which will more than offset the loss of life a Trump presidency will cause!

How do you argue conclusively against a hypothetical scenario like that? You can't.

What you need here is not a proof from the world of mathematical logic. What you need is a persuasive essay, blog post, oral argument, whatever, that takes into account your audience's feelings and knowledge about your assertions and tries to convince them in ways that speak to them. There is no way to speak to everybody - everyone is convinced by different approaches and repelled by different approaches. For all you know somebody might find you convincing or the opposite because you remind them of their 5th grade art teacher's brother. But there are very few people in the world who are going to be convinced by a truth table or mathematical proof about things like this - and those who are sufficiently logic-oriented to be open to logical arguments and not put off by a mathematical proof are probably going to call bullshit on your proof, because there is no way for a set of hugely broad assertions and conclusions like that to be "proven" without giant logical leaps.

That doesn't mean it's not good to apply logical thinking to the persuasive arguments you do make. Trying to avoid fallacies is important if you care about truth. But that's for you to apply to your own thinking, and possibly to explain to others with words and examples and so on - not with logical symbols.


I ran your proof through ChatGPT 4o on my own, and while I won't post the response here for fear of being banned, it said it was overall pretty good but could use some tweaks in framing and formatting

ChatGPT has no ability to follow logic - literally none - so it can tell you a proof is good or bad and it's equally meaningless either way. This is not what ChatGPT is for. It is exactly what it's worst at.

ChatGPT is like a kid who's seen a bunch of proofs and doesn't understand them but can bullshit through an imitation of their basic form. That's all.
posted by trig at 8:09 PM on October 10 [12 favorites]


I think the real question here is what do you mean by "good enough"? I initially assumed, like hydropsyche, that this was some sort of school assignment. Since it isn't that, I'm now guessing it's some attempt at persuading someone somewhere, or just an exercise for your own enjoyment.

If you're trying to persuade someone else with this logical argument, only they can tell you if it's good enough. If you're doing it for yourself, only you can decide if it's good enough. If you care what I think (and I'm not suggesting that you should; you'd be in good company if you didn't :-), I agree with your conclusion, and I think your overall argument is sound, but I think attempting to frame it as an exercise in formal logic is probably a bit misguided.
posted by Reverend John at 8:17 PM on October 10 [1 favorite]


Best answer: What follows is a question, some commentary on the ChatGPT proof, a bit of general advice, and some nuances.

What do you want your proof to do? That is, when you ask whether your proof is good enough, I want to know what you are hoping to do with the proof. That sets the success conditions, which then determine whether the proof is good enough.

Maybe you want your proof to be deductively valid (in classical propositional logic). Great! I can check your proof for validity and teach you to do the same, if you have time and inclination. As it turns out, your proof is not valid. I'll show you why in just a second, but for now, notice that if that's right, then your proof is not good enough by that standard.

But maybe you were aiming at something else. Maybe you want your proof to be rhetorically effective. Great! Now I have no idea whether your proof is good enough, and I don't have a mechanical way of checking. But you can check empirically by presenting it to your intended audience or to a similar-enough audience and seeing whether they find it convincing. Maybe you have some other standard of goodness in mind. Saying whether your proof is good enough will be sensitive to your aim.

Now some commentary. First, some of the translations don't look right. ChatGPT is trying to render P6, which reads, "Ensuring the defeat of Donald Trump is critical for the protection of Black lives, women's rights, LGBTQ rights, and democracy," as T -> (B ^ W ^ L ^ D), where T = "Donald Trump is antithetical to Black lives, women's rights, LGBTQ rights, and democracy," and (B ^ W ^ L ^ D) = "You care about Black lives AND you care about women's rights AND you care about LGBTQ rights AND you care about the preservation of democracy." But the formal sentence T -> (B ^ W ^ L ^ D) should be rendered as "If T, then (B ^ W ^ L ^ D)," which would come out like: "If ensuring the defeat of Donald Trump is critical for the protection of Black lives, women's rights, LGBTQ rights, and democracy, then you care about Black lives AND you care about women's rights AND you care about LGBTQ rights AND you care about the preservation of democracy." To me, that doesn't look equivalent (or really even close) to "Ensuring the defeat of Donald Trump is critical for the protection of Black lives, women's rights, LGBTQ rights, and democracy."

Second, even if the translations were all okay, the proof itself wouldn't be valid. What it initially calls a "step by step proof" is just a list of the premisses. The second run proof doesn't actually terminate in the conclusion, so I assume that the conclusion is supposed to be the very next step. But the conclusion is a sentence with the sentence letter E in the consequent of a conditional, and the letter E never appeared previously in the proof. So, let all of the other sentence letters (other than E) have the truth-value TRUE. Since there are no negation operators in your proof, all of the premisses of the proof will be TRUE, and also the antecedent of the conclusion will be TRUE. Then let E have the truth-value FALSE. That is, we suppose we are in a possible world in which all of the sentence letters are TRUE except for letter E, which is FALSE. In that world, all of the premisses of the proof are TRUE, and the conclusion of the proof is FALSE (since the conclusion is a conditional that has a TRUE antecedent and a FALSE consequent). Therefore, the proof is not valid.
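The countermodel argument above can be checked mechanically with a brute-force truth table. Since the actual ChatGPT proof isn't reproduced in this thread, the premises below are stand-ins (my assumption, not the real proof) that share the relevant shape: negation-free premises, and a conclusion that is a conditional whose consequent letter E never appears in any premise:

```python
# Brute-force validity check: an argument is valid iff no assignment of
# truth-values makes every premise true while making the conclusion false.
from itertools import product

def implies(a, b):
    # Material conditional: a -> b is false only when a is true and b is false.
    return (not a) or b

def is_valid(premises, conclusion, letters):
    """Try every truth-value assignment; return (False, countermodel) if one
    makes all premises true and the conclusion false, else (True, None)."""
    for values in product([True, False], repeat=len(letters)):
        v = dict(zip(letters, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False, v  # found a countermodel
    return True, None

# Stand-in argument with the shape described above (NOT the actual proof):
letters = ["T", "B", "E"]
premises = [
    lambda v: v["T"],                   # P1: T            (negation-free)
    lambda v: implies(v["T"], v["B"]),  # P2: T -> B       (negation-free)
]
conclusion = lambda v: implies(v["B"], v["E"])  # C: B -> E (E is brand new)

valid, countermodel = is_valid(premises, conclusion, letters)
print(valid, countermodel)  # False {'T': True, 'B': True, 'E': False}
```

Exactly as described: set every letter TRUE except E, and the negation-free premises come out TRUE while the conclusion's conditional has a TRUE antecedent and FALSE consequent, so the argument is invalid.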

Now two bits of general advice. First, it's very common for people to put some of their evidential support (clauses that ought to be premisses) in their conclusion after a "because" connective. You've done that in your conclusion. But that distorts the proof that you need to give. By putting the "because" into the conclusion, you make the proof about the relation of logical or evidential support. That is, you're now making an argument that some conclusion holds because (or in virtue of) some evidence being the case. But what you really wanted to do is to use the evidence to show that the conclusion holds. So, I would simplify what you've written as your conclusion to something like this: "Despite Harris' flaws, it is the duty of all American citizens to ensure that Harris wins the 2024 election."

Second, when constructing proofs like yours, start with a kind of "top level" proof and proceed by unfolding. That is, start with something in a really simple form, such as modus ponens. In your case, you want an argument like this:

[A1] BLAH
[A2] If [A1], then despite Harris' flaws, it is the duty of all American citizens to ensure that Harris wins the 2024 election.
----------------------------------
[A3] Despite Harris' flaws, it is the duty of all American citizens to ensure that Harris wins the 2024 election.

Now, ask yourself what substitutes for BLAH that is itself plausible and that makes [A2] plausible as well. This might be as simple as, "Trump and Harris are the only two viable candidates, and Trump is significantly worse than Harris with respect to every important policy position." Or you might be more elaborate. Either way, the next question is whether the argument in that level of detail meets your criteria for being good enough. If it does, you're finished! If it doesn't, try unfolding the argument another layer by asking what reasons you have for accepting each of the premisses in your top level argument. Then construct sub-arguments. Here's an example using the logical form called hypothetical syllogism:

[B1] If [A1], then the most important thing is to prevent Trump from winning the election.
[B2] If the most important thing is to prevent Trump from winning the election, then [A3].
-------------------------------
[A2] If [A1], then [A3].
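If it helps to see that this unfolded skeleton really is valid, here is a quick truth-table check of the combined two-step argument (hypothetical syllogism feeding modus ponens), using placeholder sentence letters A1, M, and A3 rather than their actual content:

```python
# Check that {A1, A1 -> M, M -> A3} entails A3: the argument is valid iff
# every assignment making all three premises true also makes A3 true.
from itertools import product

def implies(a, b):
    return (not a) or b

letters = ["A1", "M", "A3"]
valid = all(
    implies(v["A1"] and implies(v["A1"], v["M"]) and implies(v["M"], v["A3"]),
            v["A3"])
    for values in product([True, False], repeat=3)
    for v in [dict(zip(letters, values))]
)
print(valid)  # True
```

Unlike the ChatGPT proof, this skeleton passes the check: any assignment that makes all the premises true forces A3 to be true as well.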

And finally a bit of nuance. I generally agree with flabdablet's remarks on normative and non-normative claims in proofs. However, there are some subtleties. For one thing, the propositional logic that we're working with here (and in ChatGPT's "proof") assumes that the simple (or atomic) sentences are all declarative mood, present tense sentences, each of which may be assigned a truth-value of either TRUE or FALSE, and that the assignments are independent of each other (that is, any two atomic sentences could both be true, both be false, or one false and the other true in either order). There is a difficult issue having to do with normative sentences. It's not obvious that they are truth apt. That is, it's not obvious that they can be assigned a truth value. If, for example, a normative claim such as "Murder is wrong" simply expresses an emotion or preference-like attitude, such as "Boo to murder!" it might not be truth-evaluable. And then it won't make sense to represent such sentences in classical propositional logic.

For another thing, you initially suggested that you want to derive an imperative. If so, then you need to work with an imperative logic, which has different features from the classical propositional logic. But your stated conclusion is not an imperative; it's a declarative sentence with what appears to be a normatively-freighted term, "duty." An imperative form might look like, "Ensure Harris wins, despite her flaws!" You could also frame your conclusion with a deontic modal operator such as "should" or "ought," as in "All Americans ought to ensure that Harris wins." That rendering wouldn't be an imperative, but also, if you really care about the logical structure, you'll probably want to put it into a deontic logic, where you can deal explicitly with the deontic modality, rather than leaving it in propositional logic where the modal will sit invisibly inside a simple sentence letter.
posted by Jonathan Livengood at 9:22 PM on October 10 [20 favorites]


Flagged Jonathan Livengood's answer as fantastic, by which of course I meant outstandingly excellent, not fanciful.
posted by flabdablet at 9:37 PM on October 10 [2 favorites]


I think the problem with this:

Sure you can. All you need to do is make sure that at least one the premises you feed into your argument is itself normative.

...is that, if you want proof to equal truth, the premise cannot be proven to be true (except by the same trick, but then we're just faced with the same problem ad infinitum). It seems that the premises have to be facts in order to be able to claim that a proof leads to a truth.

That said, as I'm sure is clear from comparing my answers to others', I'm no philosopher or logician, just a regular person who considers herself capable of making logical arguments. But Dunning-Kruger, so who knows.
posted by If only I had a penguin... at 9:59 PM on October 10


Response by poster: First of all, thank you so much for all the answers, everyone.

To clear up the questions that many of you asked: I asked this question because I want to help people see whom to support (if you are for at least Black lives, women's rights to bodily autonomy, LGBTQ rights, and the preservation of democracy) in as rational a way as possible. From my understanding, what is literally logical is "better" than what is simply "rational," since logic follows mathematical rules, if I am not mistaken.

As far as how I want to help people see what is the most rational/logical choice in this election, I want to be able to ask influential people I know to spread the message of at least what is objectively true about the candidates in this election and the results of what will happen if each wins.

As far as why I asked if the proof is "good enough", I asked that because, to my understanding, I would have to know all of the different forms of logic to come to a definitive answer.

I will carefully examine all of the comments here to help me make a more informed decision on what I do moving forward.
posted by dhahngh-dhahngh at 10:03 PM on October 10


It seems that the premises have to be facts in order to be able to claim that a proof leads to a truth.

Same applies to all deductive reasoning, normative or otherwise. The premises need well-defined truth values in order for there to be any point in reasoning with them.

Politics is concerned overwhelmingly with personal values. Facts are secondary, and complicated chains of reasoning based upon them almost never sway a political opinion.

Difficult political decisions - for example, whether or not to vote for a candidate with a record of full-throated support for a current and ongoing genocide when all viable candidates have exactly such a record - are difficult because of values conflicts between voter and candidate(s), not because the voter has any genuine difficulty in evaluating or reasoning about the facts.

The most common failure mode of political decision making is in finding the incumbent's position on some or other issue so overwhelmingly disgusting, or their performance in office on it so overwhelmingly disappointing, that the vote is either avoided altogether or given to the challenger on no better basis than that they are not the overwhelmingly awful incumbent.

It's that blinding sense of overwhelm that leads to the failure by entirely suppressing the question of whether or not the challenger is even more disgusting (which they frequently are). The failure occurs at the level of selecting premises upon which to reason, and no amount of 100% sound logic built on an inadequate set of premises can render it any less inadequate.

This is where the weighted score process can offer some value. If conscientiously applied, it can isolate that overwhelming sense of disgust by confining it to assigning a specific score to a specific candidate on a specific issue, which can stop it completely paralyzing all decision-making right out of the gate.
posted by flabdablet at 11:46 PM on October 10


It seems that the premises have to be facts in order to be able to claim that a proof leads to a truth.

The way that this difficulty is usually resolved in practice is for a proof to be used as a demonstration that a collection of premises with pre-agreed truth values leads logically to a conclusion whose truth value was not immediately obvious just by examining those premises.

As well as the truth values of the premises needing pre-agreement, so does the validity of the system of logic within which the proof is constructed. This is actually the main place where the use of formal logic for political decision making fails: most people simply do not have the training in formal logic required to distinguish reasoning that's formally fallacious from reasoning that isn't.

If you're trying to make a political argument that depends on formal and informal logic yielding different outcomes - especially if you are yourself not well versed in formal logic - you're on a hiding to nothing persuasion-wise.

Most people will see the weighted-scores decision-making process as valid once the procedure is explained, without any need for rigorous academic training.

Somewhat more creative users, on being presented with a weighted-scores evaluator in the form of a spreadsheet, will also see almost immediately how they can tweak the weights to make the process yield the outcome they'd already decided on beforehand, in order to construct a "scientific" rationalization for that outcome. As it turns out, the overwhelming majority of human decision-making actually works that way - we make the decision first and only then construct a rationale for it.
posted by flabdablet at 12:34 AM on October 11 [2 favorites]

