Programmatically validating an informal API
May 9, 2018 3:32 PM Subscribe
I need to create an automated solution for ensuring several client and server software modules of various versions can successfully interoperate. They don't have a formally specified API, so what are the best approaches?
Assume the existence of a client C with versions C₁…Cₓ and a server S with versions S₁…Sₓ. We'd like to validate which version combinations of client and server can successfully communicate. If we were starting from scratch, we'd likely use something like RAML or gRPC to generate the client/server interface stubs and enforce the API in code, but that's not an option here, as previous versions' code can't be modified. Also, the actual API is fairly ad hoc and can't easily be described by these structured specifications.
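The version-matrix part of the problem can be automated independently of how each pair is exercised. A minimal sketch, assuming a hypothetical `run_pair` callable that launches one client version against one server version and reports success:

```python
from itertools import product

def check_pair(client_version, server_version, run_pair):
    """Run one client/server combination; treat any exception as a failure."""
    try:
        return bool(run_pair(client_version, server_version))
    except Exception:
        return False

def compatibility_matrix(client_versions, server_versions, run_pair):
    """Exercise every (client, server) pair and collect pass/fail results."""
    return {
        (c, s): check_pair(c, s, run_pair)
        for c, s in product(client_versions, server_versions)
    }

# Toy stand-in for run_pair: pretend clients can only talk to servers
# sharing the same major version. The real one would launch processes.
def fake_run_pair(c, s):
    return c.split(".")[0] == s.split(".")[0]

matrix = compatibility_matrix(["1.0", "2.0"], ["1.1", "2.1"], fake_run_pair)
```

The matrix output is then a direct answer to "which combinations interoperate," whatever mechanism sits behind `run_pair`.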
The best approach I can think of so far is to create a simulator that can act as either client or server but shares the same code for both roles, so it can't get out of sync with itself, and then to write some automation that exercises the various versions of the real client and server code against it.
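The shared-code idea can be sketched roughly as follows, assuming (hypothetically) a line-oriented JSON protocol: one codec is used by both roles, so the simulator's client side and server side cannot drift apart.

```python
import json

# A single codec shared by both roles: the simulator cannot disagree
# with itself about the wire format.
def encode(msg: dict) -> bytes:
    return (json.dumps(msg) + "\n").encode("utf-8")

def decode(raw: bytes) -> dict:
    return json.loads(raw.decode("utf-8"))

class Simulator:
    """Plays either peer; the real client or server sits on the other side."""

    def as_client(self, op: str) -> bytes:
        """Produce a request as a real client would."""
        return encode({"op": op})

    def as_server(self, request_bytes: bytes) -> bytes:
        """Answer a request as a real server would."""
        req = decode(request_bytes)
        return encode({"status": "ok", "echo": req})

# Round-trip sanity check: the simulator talking to itself.
sim = Simulator()
reply = decode(sim.as_server(sim.as_client("ping")))
```

In real use, `as_client` would be pointed at each real server version and `as_server` would receive traffic from each real client version; the automation just swaps which end is real.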
Does this seem like a reasonable approach? Can you point me towards any examples of applying something like this?
What other approaches might work?
Perhaps you have your use cases already well in hand, but since you describe the APIs as fairly ad hoc, it seems like something could easily be missed. I'd want to be sure I have the functionality we care about, and I'd gain confidence in that by writing down the important use cases and automating those.
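One way to capture a use case as an executable check, with `api_call` as a hypothetical stand-in for however the client reaches a given server version:

```python
# One important use case captured as an automated check: create an item,
# then fetch it back. `api_call` abstracts the transport to a server version.
def exercise_create_and_fetch(api_call):
    created = api_call("create_item", {"name": "widget"})
    assert created["id"] is not None
    fetched = api_call("get_item", {"id": created["id"]})
    assert fetched["name"] == "widget"

# Toy in-memory backend, just to show the shape of the test.
store = {}
def fake_api_call(op, payload):
    if op == "create_item":
        item_id = len(store) + 1
        store[item_id] = payload["name"]
        return {"id": item_id}
    return {"name": store[payload["id"]]}

exercise_create_and_fetch(fake_api_call)
```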
posted by cyclicker at 9:41 PM on May 9, 2018
This is very dependent on what data structures they're using, but OpenAPI (formerly Swagger) works with JSON and is a lot looser than some other definitions if you want to go the semi-formal route.
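To illustrate how loose an OpenAPI description can be, here is a hedged sketch of a single endpoint that accepts any JSON object (the title and path are invented):

```yaml
openapi: "3.0.3"
info:
  title: Ad hoc client/server API   # hypothetical name
  version: "1.0"
paths:
  /message:                          # hypothetical endpoint
    post:
      requestBody:
        content:
          application/json:
            schema:
              type: object
              additionalProperties: true   # deliberately loose: any JSON object
      responses:
        "200":
          description: Any JSON reply
```

Schemas can then be tightened field by field as the real message shapes become clear.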
Honestly, I'd probably just set up test requests in a tool like SoapUI/Postman or the command-line version of the same as individual projects, and then run each project against each "version" of the API and tweak accordingly. Then you'd have an ad hoc regression test suite.
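The command-line version of that idea might look like the sketch below: the same canned requests replayed against each server version, with replies compared to stored expectations. The `transport` callable is a stand-in for Postman/curl and would do real HTTP in practice.

```python
# Ad hoc regression suite: canned requests with expected replies.
CANNED = {
    "ping":   "pong",
    "status": "ok",
}

def run_suite(version, transport):
    """Return {request_name: passed} for one server version."""
    return {
        name: transport(version, name) == expected
        for name, expected in CANNED.items()
    }

# Stub transport: pretend v1 predates the status endpoint.
def fake_transport(version, request):
    table = {
        ("v1", "ping"): "pong",
        ("v2", "ping"): "pong",
        ("v2", "status"): "ok",
    }
    return table.get((version, request), "error")

results = {v: run_suite(v, fake_transport) for v in ("v1", "v2")}
```

The resulting per-version pass/fail report is exactly the ad hoc regression suite described above, just runnable from a scheduler instead of by hand.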
posted by mikeh at 7:17 AM on May 10, 2018
This thread is closed to new comments.
posted by Aleyn at 4:20 PM on May 9, 2018