What best explains the use of auto-discovery in API implementations?
A. It makes API Manager aware of API implementations and hence enables it to enforce policies
B. It enables Anypoint Studio to discover API definitions configured in Anypoint Platform
C. It enables Anypoint Exchange to discover assets and makes them available for reuse
D. It enables Anypoint Analytics to gain insight into the usage of APIs
Explanation:
Correct Answer: It makes API Manager aware of API implementations and hence enables it
to enforce policies.
*****************************************
>> API Autodiscovery is a mechanism that manages an API from API Manager by pairing
the deployed application to an API created on the platform.
>> API Management includes tracking, enforcing policies if you apply any, and reporting
API analytics.
>> Critical to the Autodiscovery process is identifying the API by providing the API name
and version.
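For illustration, here is a minimal sketch of how this pairing might look in a Mule 4 application's XML configuration, assuming API Manager 2.x, where the API instance is identified by an ID rather than by name and version (the property name and flow name below are hypothetical placeholders):

    <!-- Hypothetical sketch: pairs this deployed application with an API
         instance created in API Manager so that policies can be enforced.
         ${api.id} would be supplied as a deployment property. -->
    <api-gateway:autodiscovery apiId="${api.id}" flowRef="api-main-flow" />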
References:
https://docs.mulesoft.com/api-manager/2.x/api-auto-discovery-new-concept
https://docs.mulesoft.com/api-manager/1.x/api-auto-discovery
Traffic is routed through an API proxy to an API implementation. The API proxy is managed
by API Manager and the API implementation is deployed to a CloudHub VPC using
Runtime Manager. API policies have been applied to this API. In this deployment scenario,
at what point are the API policies enforced on incoming API client requests?
A. At the API proxy
B. At the API implementation
C. At both the API proxy and the API implementation
D. At a MuleSoft-hosted load balancer
Explanation:
Correct Answer: At the API proxy
*****************************************
>> API policies can be enforced at two places in the Mule platform.
>> One: as embedded policy enforcement in the same Mule runtime where the API implementation is running.
>> Two: on an API proxy sitting in front of the Mule runtime where the API implementation is running.
>> As the deployment scenario in the question involves an API proxy, the policies will be enforced at the API proxy.
An organization uses one specific CloudHub (AWS) region for all CloudHub deployments.
How are CloudHub workers assigned to availability zones (AZs) when the organization's
Mule applications are deployed to CloudHub in that region?
A. Workers belonging to a given environment are assigned to the same AZ within that region
B. AZs are selected as part of the Mule application's deployment configuration
C. Workers are randomly distributed across available AZs within that region
D. An AZ is randomly selected for a Mule application, and all the Mule application's CloudHub workers are assigned to that one AZ
Explanation:
Correct Answer: Workers are randomly distributed across available AZs within that region.
*****************************************
>> Currently, we can only choose which AWS region to deploy to; there is no configuration or deployment option to control which Availability Zone (AZ) is assigned to which worker.
>> There are NO fixed or implicit platform rules for assigning AZs to workers based on environment or application.
>> AZs are assigned completely at random. However, CloudHub does ensure high availability by distributing an application's workers across more than one AZ, so that all workers of the same application are not assigned to the same AZ.
Reference: https://help.mulesoft.com/s/question/0D52T000051rqDj/one-cloudhub-aws-region-howcloudhub-workers-are-assigned-to-availability-zones-azs-
An API with multiple API implementations (Mule applications) is deployed to both CloudHub and customer-hosted Mule runtimes. All the deployments are managed by the MuleSoft-hosted control plane. An alert needs to be triggered whenever an API implementation stops responding to API requests, even if no API clients have called the API implementation for some time. What is the most effective out-of-the-box solution to create these alerts to monitor the API implementations?
A. Create monitors in Anypoint Functional Monitoring for the API implementations, where each monitor repeatedly invokes an API implementation endpoint
B. Add code to each API client to send an Anypoint Platform REST API request to generate a custom alert in Anypoint Platform when an API invocation times out
C. Handle API invocation exceptions within the calling API client and raise an alert from that API client when such an exception is thrown
D. Configure one Worker Not Responding alert in Anypoint Runtime Manager for all API implementations that will then monitor every API implementation
Explanation:
Correct Answer: Create monitors in Anypoint Functional Monitoring for the API implementations, where each monitor repeatedly invokes an API implementation endpoint.
In scenarios where multiple API implementations are deployed across different environments (CloudHub and customer-hosted runtimes), Anypoint Functional Monitoring is the most effective out-of-the-box tool to monitor API availability and trigger alerts when an API implementation becomes unresponsive. Because each monitor actively invokes an API endpoint on a fixed schedule, an unresponsive implementation is detected even when no API clients have called it for some time.
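Conceptually, a functional monitor is a scheduled synthetic caller. The sketch below (plain Python with a hypothetical endpoint URL and alert hook; this is NOT Anypoint Functional Monitoring's actual test syntax) illustrates the mechanism: the endpoint is polled on a fixed interval, so a dead implementation is caught even with zero client traffic.

    # Conceptual sketch only: NOT Anypoint Functional Monitoring syntax.
    # ENDPOINT and raise_alert() are hypothetical placeholders.
    import time
    import urllib.error
    import urllib.request

    ENDPOINT = "https://api.example.com/health"  # hypothetical health endpoint
    INTERVAL_SECONDS = 300                       # poll every 5 minutes

    def raise_alert(message: str) -> None:
        # Placeholder: on Anypoint Platform, the platform raises the alert itself.
        print(f"ALERT: {message}")

    def check_once() -> None:
        try:
            # Invoke the endpoint regardless of real client traffic.
            with urllib.request.urlopen(ENDPOINT, timeout=10) as resp:
                if resp.status != 200:
                    raise_alert(f"Unexpected status {resp.status} from {ENDPOINT}")
        except (urllib.error.URLError, TimeoutError) as exc:
            raise_alert(f"No response from {ENDPOINT}: {exc}")

    while True:
        check_once()
        time.sleep(INTERVAL_SECONDS)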
How can the application of a rate limiting API policy be accurately reflected in the RAML definition of an API?
A. By refining the resource definitions by adding a description of the rate limiting policy behavior
B. By refining the request definitions by adding a remainingRequests query parameter with description, type, and example
C. By refining the response definitions by adding the out-of-the-box Anypoint Platform ratelimit-enforcement securityScheme with description, type, and example
D. By refining the response definitions by adding the x-ratelimit-* response headers with description, type, and example
Explanation:
Correct Answer: By refining the response definitions by adding the x-ratelimit-* response headers with description, type, and example
*****************************************
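For illustration, here is a minimal RAML sketch of such response definitions (the API title, resource path, descriptions, and example values are illustrative assumptions; only the x-ratelimit-* header names come from the correct answer):

    #%RAML 1.0
    title: Example API
    version: v1

    /items:
      get:
        responses:
          200:
            headers:
              x-ratelimit-limit:
                description: Total requests allowed in the current time window
                type: integer
                example: 100
              x-ratelimit-remaining:
                description: Requests remaining in the current time window
                type: integer
                example: 94
              x-ratelimit-reset:
                description: Milliseconds until the current window resets
                type: integer
                example: 30000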
Which two statements are true about the technology architecture of an Anypoint Virtual
Private Cloud (VPC)?
(Choose 2 answers)
A. Ports 8081 and 8082 are used
B. CIDR blocks are used
C. Anypoint VPC is responsible for load balancing the applications
D. Round-robin load balancing is used to distribute client requests across different applications
E. By default, HTTP requests can be made from the public internet to workers at port 6091
Explanation:
Correct Answer: Ports 8081 and 8082 are used, and CIDR blocks are used.
An Anypoint Virtual Private Cloud (VPC) provides a secure and private networking environment for MuleSoft applications. CloudHub workers listen on port 8081 for HTTP and port 8082 for HTTPS, and the VPC's private address space is defined by a CIDR block chosen when the VPC is created (for example, 10.0.0.0/24 yields 256 private addresses). The VPC itself does not load balance applications; load balancing is handled by the CloudHub shared or dedicated load balancers.
What condition requires using a CloudHub Dedicated Load Balancer?
A. When cross-region load balancing is required between separate deployments of the same Mule application
B. When custom DNS names are required for API implementations deployed to customer-hosted Mule runtimes
C. When API invocations across multiple CloudHub workers must be load balanced
D. When server-side load-balanced TLS mutual authentication is required between API implementations and API clients
Explanation:
Correct Answer: When server-side load-balanced TLS mutual authentication is required between API implementations and API clients
*****************************************
Fact/Memory Tip: Although there are many benefits of a CloudHub Dedicated Load Balancer (DLB), TWO important reasons to consider it are:
>> Having URL endpoints with custom DNS names on CloudHub-deployed apps
>> Configuring custom certificates for both HTTPS and two-way (mutual) TLS authentication
Coming to the options provided for this question:
>> We CANNOT use a DLB to perform cross-region load balancing between separate deployments of the same Mule application.
>> We can have mapping rules so that more than one DLB URL points to the same Mule app, but vice versa (more than one Mule app having the same DLB URL) is NOT possible.
>> It is true that a DLB helps set up custom DNS names for CloudHub-deployed Mule apps, but NOT for apps deployed to customer-hosted Mule runtimes.
>> It is true that we can load balance API invocations across multiple CloudHub workers using a DLB, but it is NOT a must; the same load balancing can be achieved with the Shared Load Balancer (SLB), so a DLB is not required for that.
So the only option that fits the scenario and requires a DLB is when server-side load-balanced TLS mutual authentication is required between API implementations and API clients.
Reference: https://docs.mulesoft.com/runtime-manager/cloudhub-dedicated-load-balancer
A retail company with thousands of stores has an API to receive data about purchases and
insert it into a single database. Each individual store sends a batch of purchase data to the
API about every 30 minutes. The API implementation uses a database bulk insert
command to submit all the purchase data to a database using a custom JDBC driver
provided by a data analytics solution provider. The API implementation is deployed to a
single CloudHub worker. The JDBC driver processes the data into a set of several
temporary disk files on the CloudHub worker, and then the data is sent to an analytics
engine using a proprietary protocol. This process usually takes less than a few minutes.
Sometimes a request fails. In this case, the logs show a message from the JDBC driver
indicating an out-of-file-space message. When the request is resubmitted, it is successful.
What is the best way to try to resolve this throughput issue?
A.
se a CloudHub autoscaling policy to add CloudHub workers
B.
Use a CloudHub autoscaling policy to increase the size of the CloudHub worker
C.
Increase the size of the CloudHub worker(s)
D.
Increase the number of CloudHub workers
Increase the number of CloudHub workers
Explanation: Explanation
Correct Answer: Increase the size of the CloudHub worker(s)
*****************************************
The key details we can take from the given scenario are:
>> The API implementation uses a database bulk insert command to submit all the purchase data to a database.
>> The JDBC driver processes the data into a set of several temporary disk files on the CloudHub worker.
>> Sometimes a request fails, and the logs show an out-of-file-space message from the JDBC driver.
Based on these details:
>> Neither auto-scaling option helps, because auto-scaling rules cannot be triggered by error messages. Auto-scaling is driven by CPU and memory usage, not by a particular error or disk-space condition.
>> Increasing the number of CloudHub workers also does NOT help, because the failure is not caused by CPU or memory limits; it is caused by disk space.
>> Moreover, the API performs a bulk insert of each received batch, which means each batch is handled by ONE worker at a time. The disk-space issue must therefore be tackled on a per-worker basis; with multiple workers, a batch can still fail on whichever worker runs out of disk space.
Therefore, the right way to resolve this issue is to increase the worker size (vCores) so that a worker with more disk space is provisioned.