As part of an LMS evaluation at a large research university, I needed to build a Postman testing framework that would let the whole team validate API behaviour across three platforms: Blackboard, Canvas, and Brightspace. The constraint was that any developer on the project should be able to run it — not just the analysts who already knew the systems well.
Each platform presented a different challenge.
Blackboard: the platform with a Swagger file
Blackboard publishes its Learn API as an OpenAPI (Swagger) spec through the Anthology Developer Portal. This changes the setup significantly. Rather than manually building requests from documentation, you import the spec directly into Postman and get a complete, structured collection.
Authentication uses OAuth 2.0 with the client_credentials grant type. The flow is: POST to the token endpoint with Basic Auth (Client ID + Client Secret), receive an access_token, then include that token as a Bearer header on all subsequent requests. A short post-response script saves it to an environment variable automatically, so you never copy tokens manually:
// Parse the response body to get the access token.
const response = pm.response.json();
// Set the access token in the environment variables.
pm.environment.set('apiKey', `Bearer ${response.access_token}`);

Setting this at the collection level means all requests inherit the token without any manual steps. Combined with Postman environment variables for the base URL and credentials, the collection becomes portable — anyone can clone the repo, import the collection, set three variables, and start testing.
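The token request itself can be sketched outside Postman too. A minimal Node.js version of the flow described above, with the base URL and credentials as placeholders (the token endpoint path shown is the standard Learn one, but verify it against your tenant):

```javascript
// Sketch of the client_credentials token request.
// TOKEN_URL, CLIENT_ID and CLIENT_SECRET are placeholders.
const TOKEN_URL = 'https://example.blackboard.com/learn/api/public/v1/oauth2/token';
const CLIENT_ID = 'your-client-id';
const CLIENT_SECRET = 'your-client-secret';

// client_credentials authenticates the token request itself with
// HTTP Basic auth: base64("clientId:clientSecret").
function basicAuthHeader(id, secret) {
  return 'Basic ' + Buffer.from(`${id}:${secret}`).toString('base64');
}

async function fetchToken() {
  const res = await fetch(TOKEN_URL, {
    method: 'POST',
    headers: {
      Authorization: basicAuthHeader(CLIENT_ID, CLIENT_SECRET),
      'Content-Type': 'application/x-www-form-urlencoded',
    },
    body: 'grant_type=client_credentials',
  });
  const { access_token } = await res.json();
  // Same value the Postman script stores in `apiKey`.
  return `Bearer ${access_token}`;
}
```

This is the same three-step flow the collection automates: Basic-auth POST, read `access_token`, prefix with `Bearer`.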
The full collection is available here: blackboard-learn-apis-postman-collection.
Canvas: good docs, with one thing worth knowing
Canvas doesn't publish a Swagger file, so the process was different: work through the API documentation manually, identify the endpoints relevant to the evaluation criteria, and build the collection by hand. The documentation is readable, but without a machine-readable spec you lose the ability to import and validate endpoint structure automatically.
One thing worth noting if you test Canvas locally using the open-source version: once you authenticate through the browser UI, API requests appear to go through without a Bearer token. This seems to be the session cookie carrying the authentication, which the API accepts alongside OAuth2. It didn't affect the evaluation testing itself, but it's worth being aware of if you're running a local instance — it can mask auth issues that would only surface in a real OAuth2 flow.
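One way to guard against that masking is to exercise the API outside the browser session entirely, so authentication can only come from the token. A sketch, assuming a local instance URL and using the standard `/api/v1/courses` endpoint:

```javascript
// Call the Canvas API with only an explicit Bearer token. Run from
// Node there is no browser session cookie, so a missing or invalid
// token fails loudly instead of being silently covered by the cookie.
// BASE_URL is a placeholder for a local open-source Canvas instance.
const BASE_URL = 'http://localhost:3000';

// Build request options that carry nothing but the token.
function authOnlyOptions(token) {
  return { headers: { Authorization: `Bearer ${token}` } };
}

async function listCourses(token) {
  const res = await fetch(`${BASE_URL}/api/v1/courses`, authOnlyOptions(token));
  if (!res.ok) throw new Error(`Auth failed: HTTP ${res.status}`);
  return res.json();
}
```

If this fails while the same request "works" from a logged-in browser, the cookie was doing the authenticating.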
Brightspace: the documentation problem
Brightspace uses the Valence API, and it was the most difficult of the three to work with — not because the API is poorly designed, but because the documentation doesn't describe what its attributes mean.
The reference lists endpoints and shows the shape of responses, but individual fields carry no descriptions. When you're trying to chain endpoints — using a value returned from one request as an input to another — you're left inferring what each field actually represents. For a team running structured, reproducible tests under time pressure, that uncertainty adds up.
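Chaining in practice means pulling an identifier out of one response and building the next request's path with it. A sketch of the pattern; the response shape and field names (`Items`, `Identifier`) are inferred from observed Valence responses rather than documented descriptions, which is exactly the uncertainty described above:

```javascript
// Take the org-unit listing response and build the path for the
// follow-up course request. Assumed shape (not documented semantics):
//   { Items: [ { Identifier: "...", Name: "..." }, ... ] }
// The API version segment (1.31) is a placeholder for your instance's.
function nextRequestPath(orgUnitsResponse) {
  const first = orgUnitsResponse.Items[0];
  return `/d2l/api/lp/1.31/courses/${first.Identifier}`;
}

// In a Postman post-response script, the equivalent step is:
//   const body = pm.response.json();
//   pm.environment.set('orgUnitId', body.Items[0].Identifier);
```

With field descriptions in the reference, the "is `Identifier` the org unit id or something else?" question wouldn't need to be answered by trial and error.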
There's a broader point here about API documentation quality. A published spec tells you the structure; descriptions tell you the semantics. Without both, you spend time reverse-engineering intent rather than writing tests.
What the framework enabled
Across all three platforms, the goal was the same: everyone on the team runs the same set of endpoints, against the same criteria, at the same time. That consistency matters for a comparative evaluation — if different analysts test different subsets of endpoints, the results aren't comparable.
Designing for any developer — rather than just the analysts who already knew these platforms — also meant the framework would survive staff changes. A new hire joining mid-evaluation could get set up without needing a platform-specific walkthrough first.
The contrast between the three platforms is a reasonable proxy for API maturity. An OpenAPI spec isn't just convenient — it changes what tooling is possible and how quickly a new developer can become productive. That's worth considering when evaluating any platform that exposes an API.
— Karl