Is there a computerized logo comparison-o-matic?
February 28, 2006 2:48 PM
Is it possible to perform computer analysis to determine the similarities and/or differences between two logos?
A friend of a friend designed a logo for a company. Another company recently sent a cease & desist letter on the basis that the two logos were too similar to one another (ostensibly causing a likelihood of confusion and therefore an infringement of the complaining company's trademark). The logo designer thinks they are sufficiently different, but wants some objective way to prove it.
In music copyright cases, I know there is a computer model that can take a song and break it down by various attributes (tempo, key, note and chord structure, verse and chorus structure, etc.) and analyze the similarities. So you could say "The magic computer says Song A is 53% similar to Song B."
My question: is there anything similar that compares pictures or logos or drawings? I'm envisioning something where you input both logos and it looks at colors, fonts, sizes, approximate shapes, etc. and spits out a calculation. I do not think such a thing exists, and think it would be hard to create such a thing with any reliable precision. Does anyone know of anything like what I am describing?
How about you post it here and we'll find so many logos that look the same as to make their claim seem stupid?
(I'm assuming the same designer wasn't involved)
posted by holloway at 3:51 PM on February 28, 2006
Are you sure it's necessary to prove anything? IANAL, but the way I understand it, trademarks are supposed to apply only within the holder's specified category of trade. So if the two companies have different names and operate in different industries, then trademark infringement shouldn't be possible even if their logos are very similar. Might want to give a trademark lawyer a jingle. A quick initial phone consult should be cheap or free, and hopefully can save you the time of defending against a frivolous claim.
posted by nakedcodemonkey at 3:55 PM on February 28, 2006
This would be an extremely difficult problem to solve programmatically, as it would involve pattern recognition and aesthetic judgement.
posted by ook at 3:56 PM on February 28, 2006
No. In fact, I don't even think something like what you are describing exists for music.
Ultimately it would be up to a judge or jury anyway, regardless of anything your program could say.
posted by delmoi at 4:15 PM on February 28, 2006
Not that it would be particularly applicable here, but you can calculate Peak Signal-to-Noise Ratio to detect the differences between two images. However, this is typically used for evaluating the quality of lossy image compression, and not for evaluating aesthetics. It would be useful if they were claiming that a picture was EXACTLY the same as your friend's.
posted by i love cheese at 4:36 PM on February 28, 2006
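For the curious, PSNR is straightforward to compute yourself. Here's a rough numpy sketch (my own illustration, not from any particular tool); as i love cheese says, it only measures pixel-level difference, not aesthetic similarity:

```python
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio between two same-sized images (higher = more alike)."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # pixel-for-pixel identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy 8-bit "logos": identical except for a single pixel
logo_a = np.zeros((4, 4), dtype=np.uint8)
logo_b = logo_a.copy()
logo_b[0, 0] = 255

print(psnr(logo_a, logo_a))  # inf
print(psnr(logo_a, logo_b))  # about 12.04 dB
```

Note that shifting one logo over by a few pixels would tank the score even though a human would call them identical, which is exactly why this is only useful for near-exact copies.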
This is exactly the kind of thing that computers are very bad at.
posted by mr_roboto at 4:45 PM on February 28, 2006
An informal survey of people on the street might provide the sought-after objective evidence: hand people a sheet with pairs of logos side by side, and ask them to put a line through the pairs they think are from the same company and circle the pairs they think are from different companies.
One of the sets is the two logos in question.
Have a couple of examples at the top using logos that people would recognise so they get the concept.
You might have to hire someone to do it though, or implement some other way to ward off accusations that the test was rigged by simply throwing out the sheets filled out by confused people.
posted by -harlequin- at 5:11 PM on February 28, 2006
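If anyone goes the survey route, the arithmetic for summarizing the results is simple. A hedged sketch (the 7-out-of-100 numbers are made up, and this uses the plain normal-approximation interval, which is crude for small samples):

```python
import math

def confusion_rate_ci(confused: int, n: int, z: float = 1.96):
    """Share of respondents who mixed up the two logos, with a
    normal-approximation 95% confidence interval."""
    p = confused / n
    half = z * math.sqrt(p * (1 - p) / n)  # standard error times the z critical value
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical result: 7 of 100 respondents thought the logos were the same company
rate, lo, hi = confusion_rate_ci(confused=7, n=100)
print(f"{rate:.0%} confused (95% CI {lo:.1%} to {hi:.1%})")
```

Of course, as noted below, a court will care far more about the survey's methodology than about the confidence interval.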
Best answer: I am trying to perform a similar analysis of "taste / style / artistic judgement" in a statistical machine learning class. I can let you know how it turns out, but yeah, this is an open area of CS research. Nobody knows how to do this. Computer models of analogy are pretty weak at the moment, so it's hard to even know what an answer to this question might look like.
In your music comparison sample, there are metrics that are decomposable for a song. For a piece of art, or, more generally, some "designed" thing, the metrics are much harder to come by. One needs to operationalize the definitions of things like "unity", "dominance", "symmetry", etc. We're trying to do this for our system, but it's more of an art than a science at this point. But I bet your magic-music-analyzer also is taking liberties with things like its internal model of "verse and chorus structure" and the like.
posted by zpousman at 5:22 PM on February 28, 2006
Archaeologists have been using Hough transformations and Fourier transformations for some time in an effort to objectively classify artifacts, including images of artifacts. Yes, this may be exactly what computers are worst at! But anyway, you can look at this or this (PDF) for the general idea, which would be applicable to logo or other simple graphic comparisons. My guess is, it isn't worth it!
posted by Rumple at 5:45 PM on February 28, 2006
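For flavor, here's what a crude Fourier-descriptor comparison of two outlines might look like in numpy. This is a toy sketch in the spirit of the approach Rumple links, not code from those papers, and real logos would first need their outlines extracted, which is its own hard problem:

```python
import numpy as np

def fourier_descriptors(contour: np.ndarray, k: int = 8) -> np.ndarray:
    """Low-order Fourier descriptors of a closed 2-D outline (an N x 2 array of
    points), normalized to ignore position, scale, rotation, and starting point."""
    z = contour[:, 0] + 1j * contour[:, 1]  # encode (x, y) points as complex numbers
    c = np.fft.fft(z)
    c[0] = 0.0                              # drop the DC term: translation invariance
    mags = np.abs(c)                        # drop phase: rotation/start-point invariance
    mags = mags / mags[1]                   # divide by the first harmonic: scale invariance
    return np.concatenate([mags[1:k + 1], mags[-k:]])  # k harmonics on each side

def shape_distance(c1: np.ndarray, c2: np.ndarray, k: int = 8) -> float:
    return float(np.linalg.norm(fourier_descriptors(c1, k) - fourier_descriptors(c2, k)))

# Toy "outlines": a circle, the same circle blown up 3x, and an ellipse
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
ellipse = np.stack([2 * np.cos(t), np.sin(t)], axis=1)

print(shape_distance(circle, 3 * circle))  # ~0: same shape, different size
print(shape_distance(circle, ellipse))     # clearly nonzero: different shapes
```

Even so, a small distance between two outlines says nothing about colors, typography, or whether a human would actually confuse them.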
Best answer: Hough transformations may be useful for classification, but they only extract structure: you still need to do more (very hard) work beyond that. Very simple conceptual shifts (like mirroring) change the parameters of the image elements in big ways. At the moment, I'd say you're stuck using humans, but if you're interested in the topic, have a look at Hofstadter's work on Letter Spirit, which is an excellent demonstration of how hard this is (but on a slightly simpler domain).
posted by fvw at 1:57 AM on March 1, 2006
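To make fvw's mirroring point concrete, here's a tiny illustration of my own (nothing to do with Letter Spirit): the same diagonal stroke, flipped left-to-right, lands in a completely different part of the (rho, theta) space a Hough transform votes in, even though a human sees an obviously related shape:

```python
import math

def hough_params(x1: float, y1: float, x2: float, y2: float):
    """(rho, theta) normal-line form -- x*cos(theta) + y*sin(theta) = rho --
    the parameterization a Hough transform accumulates votes in."""
    dx, dy = x2 - x1, y2 - y1
    theta = math.atan2(dx, -dy)  # angle of the line's normal vector
    rho = x1 * math.cos(theta) + y1 * math.sin(theta)
    return rho, theta

# A diagonal stroke through the origin, and its mirror image
rho1, theta1 = hough_params(0, 0, 1, 1)    # the line y = x
rho2, theta2 = hough_params(0, 0, -1, 1)   # y = -x: same stroke, flipped
print(math.degrees(theta1), math.degrees(theta2))  # roughly 135 vs -135 degrees
```

Matching up parameters across transformations like this is exactly the "more (very hard) stuff" fvw is talking about.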
Is it possible to perform computer analysis to determine the similarities and/or differences between two logos?
Maybe.
I'm assuming that you would want to use that analysis to help defend your friend's friend's logo in a trademark suit. So your real question is, "Assuming this software exists, are the results of that comparison relevant in a trademark suit?"
The answer is "no." It doesn't matter how good your software is. "Likelihood of confusion" is a term of art in trademark law, not just a measure of how similar or different the logos are. Its origin is in section 2(d) of the Lanham Act. 15 U.S.C. 1052(d).
In a trademark infringement lawsuit or an opposition or cancellation proceeding in the United States Patent and Trademark Office (PTO), the court or trademark examiner will use a test evaluating anywhere from 7-13 factors to determine likelihood of confusion. The test varies by jurisdiction, but is generally pretty similar.
For example, the United States Court of Appeals for the Federal Circuit, which handles appeals from the Trademark Trial and Appeal Board (which itself takes appeals from the decisions of trademark examiners) uses a 13 factor test, established by the Court of Customs and Patent Appeals in In re E.I. DuPont DeNemours & Co., 476 F.2d 1357, 1361 (C.C.P.A. 1973). For more on this, or examples of how the test is applied, Google "DuPont Factors".
Similarly, the U.S. Court of Appeals for the Second Circuit set out its eight factor test for likelihood of confusion in Polaroid Corp. v. Polarad Elecs. Corp., 287 F.2d 492, 495 (2d Cir. 1961). Google "Polaroid factors" for more examples of this in practice.
nakedcodemonkey has part of the story correct -- trademark protection is limited. However, "similarity or relatedness of the goods" is only one of the DuPont factors. The more similar the trademarks in question, the less similar the products need to be in order for the court to find likelihood of confusion.
Survey evidence, as suggested by -harlequin-, is often used in these cases, but prepare for the court to scrutinize the survey methodology very closely, and possibly to ignore the results altogether. There is a huge line of cases within the trademark field just addressing the validity of surveys in this type of dispute.
[I am a lawyer. I am not your friend's friend's lawyer and you shouldn't rely on what I'm saying here as legal advice, etc. . . . You probably all hate to hear me say it, but the truth is that you should call a lawyer in this situation, unless your friend is willing to cease & desist].
posted by jewishbuddha at 2:45 AM on March 1, 2006
Response by poster: Thanks for the answers everyone. They confirmed my suspicions.
posted by AgentRocket at 7:10 AM on March 1, 2006
No. In fact, I don't even think something like what you are describing exists for music.
Indeed. It would be much, much simpler to have a human break down the relevant statistics for two pieces of music than to program a computer to do it.
posted by ludwig_van at 7:33 AM on March 1, 2006
A human?
posted by cillit bang at 3:51 PM on February 28, 2006
This thread is closed to new comments.