A Platform Architect inherits a legacy monolithic SOAP-based web service that performs a number of tasks, including showing all policies belonging to a client. The service connects to two back-end systems — a life-insurance administration system and a general-insurance administration system — and then queries for insurance policy information within each system, aggregates the results, and presents a SOAP-based response to a user interface (UI). The architect wants to break up the monolithic web service to follow API-led conventions. Which part of the service should be put into the process layer?
A. Combining the insurance policy information from the administration systems
B. Presenting the SOAP-based response to the UI
C. Authenticating and maintaining connections to each of the back-end administration systems
D. Querying the data from the administration systems
Explanation:
In the API-led connectivity approach, each layer (System, Process, and Experience) has a
distinct purpose: System APIs unlock data in the back-end systems (connecting,
authenticating, and querying), Process APIs orchestrate and aggregate data across multiple
System APIs, and Experience APIs shape the result for a specific channel such as the UI.
Combining the insurance policy information retrieved from the life-insurance and
general-insurance administration systems is aggregation/orchestration logic, so it belongs
in the process layer (option A).
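As a rough illustration only (not MuleSoft code), a process-layer service would fan out to the two system APIs and merge their results before handing them to an experience layer. The endpoint URLs, parameter names, and field names below are hypothetical.

```python
# Hypothetical sketch of process-layer orchestration: aggregate policies from
# two system APIs. URLs, paths, and field names are illustrative only.
import requests

LIFE_SYSTEM_API = "https://system-api.example.com/life/policies"
GENERAL_SYSTEM_API = "https://system-api.example.com/general/policies"

def get_client_policies(client_id: str) -> list[dict]:
    """Process-layer operation: fan out to both system APIs and merge the results."""
    aggregated = []
    for base_url, source in [(LIFE_SYSTEM_API, "life"), (GENERAL_SYSTEM_API, "general")]:
        # Each system API hides the connection and authentication details of its
        # back-end administration system (that logic stays in the system layer).
        response = requests.get(base_url, params={"clientId": client_id}, timeout=10)
        response.raise_for_status()
        for policy in response.json():
            policy["source"] = source
            aggregated.append(policy)
    return aggregated

if __name__ == "__main__":
    print(get_client_policies("C-12345"))
```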
A System API is designed to retrieve data from a backend system that has scalability challenges. What API policy can best safeguard the backend system?
A. IP whitelist
B. SLA-based rate limiting
C. OAuth 2.0 token enforcement
D. Client ID enforcement
SLA-based rate limiting
Explanation:
Correct Answer: SLA-based rate limiting
*****************************************
>> The Client ID enforcement policy addresses a "Compliance"-related NFR and does not help in
maintaining the "Quality of Service (QoS)". It is not meant for protecting backend systems
from scalability challenges.
>> IP whitelisting and OAuth 2.0 token enforcement address "Security"-related NFRs and likewise
do not help in maintaining the "Quality of Service (QoS)". They are not meant for protecting
backend systems from scalability challenges.
>> Rate Limiting, SLA-based Rate Limiting, Throttling, and Spike Control are the policies that
address "Quality of Service (QoS)"-related NFRs and are meant to protect backend systems
from getting overloaded.
https://dzone.com/articles/how-to-secure-apis
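As a conceptual sketch only (not the actual policy implementation), SLA-based rate limiting grants each client tier its own request quota per time window and rejects calls that exceed it. The tier names, limits, and client IDs below are made up.

```python
# Conceptual sketch of SLA-based rate limiting: each client ID is mapped to an
# SLA tier, and each tier gets its own request quota per time window.
# Tier names, limits, and client IDs are illustrative only.
import time
from collections import defaultdict

SLA_TIERS = {"bronze": 10, "silver": 100, "gold": 1000}   # requests per window
CLIENT_TIER = {"client-a": "bronze", "client-b": "gold"}
WINDOW_SECONDS = 60

_request_log = defaultdict(list)  # client_id -> list of request timestamps

def allow_request(client_id: str) -> bool:
    """Return True if the client is still within its SLA quota for the current window."""
    limit = SLA_TIERS[CLIENT_TIER.get(client_id, "bronze")]
    now = time.time()
    # Keep only timestamps that fall inside the sliding window.
    recent = [t for t in _request_log[client_id] if now - t < WINDOW_SECONDS]
    _request_log[client_id] = recent
    if len(recent) >= limit:
        return False          # over quota: the API would return HTTP 429
    recent.append(now)
    return True
```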
An API with multiple API implementations (Mule applications) is deployed to both CloudHub and customer-hosted Mule runtimes. All the deployments are managed by the MuleSoft-hosted control plane. An alert needs to be triggered whenever an API implementation stops responding to API requests, even if no API clients have called the API implementation for some time. What is the most effective out-of-the-box solution to create these alerts to monitor the API implementations?
A. Create monitors in Anypoint Functional Monitoring for the API implementations, where each monitor repeatedly invokes an API implementation endpoint
B. Add code to each API client to send an Anypoint Platform REST API request to generate a custom alert in Anypoint Platform when an API invocation times out
C. Handle API invocation exceptions within the calling API client and raise an alert from that API client when such an exception is thrown
D. Configure one Worker Not Responding alert in Anypoint Runtime Manager for all API implementations that will then monitor every API implementation
Explanation:
In scenarios where multiple API implementations are deployed across different environments
(CloudHub and customer-hosted runtimes), Anypoint Functional Monitoring is the most
effective out-of-the-box tool to monitor API availability and trigger alerts when an API
implementation becomes unresponsive. Each monitor repeatedly invokes an API
implementation endpoint at scheduled intervals, so an implementation that has stopped
responding is detected and alerted on even if no API clients have called it for some time.
By contrast, a "Worker Not Responding" alert in Runtime Manager only covers CloudHub
workers and does not verify that the API itself is answering requests.
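A minimal sketch of the idea behind such a monitor (not Anypoint Functional Monitoring itself): a scheduled probe invokes the endpoint and raises an alert when it fails or times out, independent of real client traffic. The endpoint URL and the alerting hook are hypothetical.

```python
# Conceptual sketch of a functional (synthetic) monitor: probe an endpoint on a
# schedule and alert when it stops responding, regardless of real client traffic.
# The endpoint URL and the alerting hook are hypothetical.
import time
import requests

ENDPOINT = "https://api.example.com/policies/health"
INTERVAL_SECONDS = 300  # probe every 5 minutes

def raise_alert(message: str) -> None:
    # Placeholder: in a real setup this would notify an on-call channel.
    print(f"ALERT: {message}")

def probe_once() -> None:
    try:
        response = requests.get(ENDPOINT, timeout=10)
        if response.status_code >= 500:
            raise_alert(f"{ENDPOINT} returned {response.status_code}")
    except requests.RequestException as exc:
        raise_alert(f"{ENDPOINT} is not responding: {exc}")

if __name__ == "__main__":
    while True:
        probe_once()
        time.sleep(INTERVAL_SECONDS)
```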
What is true about where an API policy is defined in Anypoint Platform and how it is then applied to API instances?
A. The API policy is defined in Runtime Manager as part of the API deployment to a Mule runtime, and then ONLY applied to the specific API instance
B. The API policy is defined in API Manager for a specific API instance, and then ONLY applied to the specific API instance
C. The API policy is defined in API Manager and then automatically applied to ALL API instances
D. The API policy is defined in API Manager, and then applied to ALL API instances in the specified environment
The API policy is defined in API Manager for a specific API instance, and then ONLY applied to the specific API instance
Explanation:
Correct Answer: The API policy is defined in API Manager for a specific API instance, and
then ONLY applied to the specific API instance.
*****************************************
>> Once our API specifications are ready and published to Exchange, we need to visit API
Manager and register an API instance for each API.
>> API Manager is the place where API aspects are managed, such as addressing NFRs by
enforcing policies.
>> We can create multiple instances of the same API and manage them differently for
different purposes.
>> One instance can have one set of API policies applied, and another instance of the same
API can have a different set of policies applied for some other purpose.
>> These APIs and their instances are defined on a PER-environment basis, so they need to
be managed separately in each environment.
>> The platform's promotion feature can ensure that the same configuration of an API
instance (SLAs, policies, etc.) gets carried forward when promoting to higher environments,
but this is optional; the configuration can still be changed per environment if needed.
>> Runtime Manager is the place to manage API implementations and their Mule runtimes,
but NOT the APIs themselves. Although API policies are executed in Mule runtimes, we
CANNOT enforce API policies from Runtime Manager; that must be done via API Manager,
for a specific API instance in an environment.
So, based on these facts, the right statement among the given choices is: "The API policy is
defined in API Manager for a specific API instance, and then ONLY applied to the specific
API instance".
Reference: https://docs.mulesoft.com/api-manager/2.x/latest-overview-concept
Which component monitors APIs and endpoints at scheduled intervals, receives reports about whether tests pass or fail, and displays statistics about API and endpoint performance?
A. API Analytics
B. Anypoint Monitoring dashboards
C. API Functional Monitoring
D. Anypoint Runtime Manager alerts
Explanation:
API Functional Monitoring (Anypoint Functional Monitoring) tests APIs and endpoints at
scheduled intervals, reports whether those tests pass or fail, and displays statistics about
API and endpoint performance, which is exactly the behavior described in the question.
Which statement is true about identity management and client management on Anypoint Platform?
A. If an external identity provider is configured, the SAML 2.0 bearer tokens issued by the identity provider cannot be used for invocations of the Anypoint Platform web APIs
B. If an external client provider is configured, it must be configured at the Anypoint Platform organization level and cannot be assigned to individual business groups and environments
C. Anypoint Platform supports configuring one external identity provider
D. Both client management and identity management require an identity provider
Explanation:
Anypoint Platform allows organizations to integrate one external identity
provider (IdP) for identity and access management (IAM), supporting SSO and centralized
user authentication.
An API implementation is deployed to CloudHub.
What conditions can be alerted on using the default Anypoint Platform functionality, where
the alert conditions depend on the end-to-end request processing of the API
implementation?
A. When the API is invoked by an unrecognized API client
B. When a particular API client invokes the API too often within a given time period
C. When the response time of API invocations exceeds a threshold
D. When the API receives a very high number of API invocations
When the response time of API invocations exceeds a threshold
Explanation:
Correct Answer: When the response time of API invocations exceeds a threshold
*****************************************
>> Alerts can be set up for all the given options using the default Anypoint Platform
functionality.
>> However, the question asks specifically about an alert whose conditions depend on the
end-to-end request processing of the API implementation.
>> The "Response Time" alert is the only one that requires end-to-end request processing of
the API implementation in order to determine whether the threshold is exceeded.
Reference: https://docs.mulesoft.com/api-manager/2.x/using-api-alerts
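To make the distinction concrete, here is a small illustrative sketch (not an Anypoint alert configuration): the response-time condition can only be evaluated after the request has been processed end to end, whereas client identity or invocation counts are known before processing completes. The endpoint URL and threshold are hypothetical.

```python
# Illustrative sketch: a response-time alert condition can only be evaluated
# after the API implementation has processed the request end to end.
# The endpoint URL and threshold are hypothetical.
import time
import requests

ENDPOINT = "https://api.example.com/policies"
RESPONSE_TIME_THRESHOLD_MS = 2000

def check_response_time(client_id: str) -> None:
    start = time.monotonic()
    requests.get(ENDPOINT, params={"clientId": client_id}, timeout=30)
    elapsed_ms = (time.monotonic() - start) * 1000
    # Only now, after end-to-end processing, can the alert condition be evaluated.
    if elapsed_ms > RESPONSE_TIME_THRESHOLD_MS:
        print(f"ALERT: response time {elapsed_ms:.0f} ms exceeded "
              f"{RESPONSE_TIME_THRESHOLD_MS} ms threshold")

if __name__ == "__main__":
    check_response_time("client-a")
```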
How can the application of a rate limiting API policy be accurately reflected in the RAML definition of an API?
A. By refining the resource definitions by adding a description of the rate limiting policy behavior
B. By refining the request definitions by adding a remainingRequests query parameter with description, type, and example
C. By refining the response definitions by adding the out-of-the-box Anypoint Platform ratelimit-enforcement securityScheme with description, type, and example
D. By refining the response definitions by adding the x-ratelimit-* response headers with description, type, and example
By refining the response definitions by adding the x-ratelimit-* response headers with description, type, and example
Explanation:
Correct Answer: By refining the response definitions by adding the x-ratelimit-* response
headers with description, type, and example
*****************************************
>> When a rate limiting policy is applied to an API, responses can carry x-ratelimit-* HTTP
headers (for example x-ratelimit-limit, x-ratelimit-remaining, and x-ratelimit-reset) that tell
the client its quota and how much of it remains in the current window.
>> Therefore, documenting these x-ratelimit-* response headers in the RAML response
definitions, with description, type, and example, is the accurate way to reflect the rate
limiting policy in the API definition.
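As a small client-side illustration (not part of the RAML itself), an API consumer can read those headers from a response. The endpoint URL is hypothetical; the header names follow the x-ratelimit-* convention described above.

```python
# Illustrative client-side sketch: read the x-ratelimit-* headers that a rate
# limiting policy can add to API responses. The endpoint URL is hypothetical.
import requests

response = requests.get("https://api.example.com/policies", timeout=10)

limit = response.headers.get("x-ratelimit-limit")          # allowed requests in the window
remaining = response.headers.get("x-ratelimit-remaining")  # requests left in the window
reset = response.headers.get("x-ratelimit-reset")          # time until the window resets

print(f"Quota: {limit}, remaining: {remaining}, resets in: {reset}")
if response.status_code == 429:
    print("Rate limit exceeded; back off before retrying.")
```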