What Mule application can have API policies applied by
Anypoint Platform to the endpoint exposed by that Mule application?
A) A Mule application that accepts requests over HTTP/1.x
A. Option A
B. Option B
C. Option C
D. Option D
Option A
Explanation:
Correct Answer: Option A
*****************************************
>> Anypoint API Manager policies can be applied to all types of HTTP/1.x APIs.
>> They are not applicable to WebSocket APIs, HTTP/2 APIs, or gRPC APIs.
Reference: https://docs.mulesoft.com/api-manager/2.x/using-policies
A company deploys Mule applications with default configurations through Runtime Manager to customer-hosted Mule runtimes. Each Mule application is an API implementation that exposes RESTful interfaces to API clients. The Mule runtimes are managed by the MuleSoft-hosted control plane. The payload is never used by any Logger components. When an API client sends an HTTP request to a customer-hosted Mule application, which metadata or data (payload) is pushed to the MuleSoft-hosted control plane?
A. Only the data
B. No data
C. The data and metadata
D. Only the metadata
A system API has a guaranteed SLA of 100 ms per request. The system API is deployed to a primary environment as well as to a disaster recovery (DR) environment, with different DNS names in each environment. An upstream process API invokes the system API and the main goal of this process API is to respond to client requests in the least possible time. In what order should the system APIs be invoked, and what changes should be made in order to speed up the response time for requests from the process API?
A. In parallel, invoke the system API deployed to the primary environment and the system API deployed to the DR environment, and ONLY use the first response
B. In parallel, invoke the system API deployed to the primary environment and the system API deployed to the DR environment using a scatter-gather configured with a timeout, and then merge the responses
C. Invoke the system API deployed to the primary environment, and if it fails, invoke the system API deployed to the DR environment
D. Invoke ONLY the system API deployed to the primary environment, and add timeout and retry logic to avoid intermittent failures
Explanation:
Correct Answer: In parallel, invoke the system API deployed to the primary environment
and the system API deployed to the DR environment, and ONLY use the first response.
*****************************************
>> The API requirement in the given scenario is to respond in the least possible time.
>> The option suggesting first invoking the API in the primary environment and then falling back to the API in the DR environment would produce a successful response, but NOT in the least possible time. So, this is NOT the right implementation choice for the given requirement.
>> The option suggesting invoking ONLY the API in the primary environment and adding timeout and retry logic may also produce a successful response after retries, but again NOT in the least possible time. So, this is also NOT the right choice for the given requirement.
>> The option suggesting invoking the API in the primary environment and the API in the DR environment in parallel using Scatter-Gather would return a merged response, which is not what the client expects. Moreover, although Scatter-Gather runs its routes in parallel, it completes its scope only after ALL routes have finished, so its overall time is bounded by the slower route. So again, this is NOT the right choice for the given requirement.
The correct choice is to invoke the API in the primary environment and the API in the DR environment in parallel, and use ONLY the first response received from either of them.
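To make the chosen pattern concrete, the following is a minimal sketch of "invoke both in parallel and use only the first response" in plain Java, not a Mule configuration; the URLs and the httpGet helper are illustrative assumptions rather than anything taken from the question.

import java.util.concurrent.CompletableFuture;

public class FirstResponseWins {

    // Hypothetical stand-ins for the HTTP calls to the system API in each environment.
    static String callPrimary() { return httpGet("https://primary.example.com/policies"); }
    static String callDr()      { return httpGet("https://dr.example.com/policies"); }

    static String fastestResponse() {
        CompletableFuture<String> primary = CompletableFuture.supplyAsync(FirstResponseWins::callPrimary);
        CompletableFuture<String> dr = CompletableFuture.supplyAsync(FirstResponseWins::callDr);

        // anyOf completes as soon as EITHER call returns, so the caller gets the
        // response in the time taken by the faster environment. This differs from
        // Scatter-Gather, which finishes only after ALL routes have completed.
        return (String) CompletableFuture.anyOf(primary, dr).join();
    }

    // Stub standing in for a real HTTP client call (assumption, not a real API).
    static String httpGet(String url) { return "response from " + url; }
}

Note that firing both calls on every request doubles the load on both environments; that trade-off is acceptable in this question only because the sole goal is the least possible response time and no rate limit is mentioned.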
An organization uses various cloud-based SaaS systems and multiple on-premises
systems. The on-premises systems are an important part of the organization's application
network and can only be accessed from within the organization's intranet.
What is the best way to configure and use Anypoint Platform to support integrations with
both the cloud-based SaaS systems and on-premises systems?
A) Use CloudHub-deployed Mule runtimes in an Anypoint VPC managed by Anypoint
Platform Private Cloud Edition control plane
A. Option A
B. Option B
C. Option C
D. Option D
Option B
Explanation:
Correct Answer: Use a combination of CloudHub-deployed and manually provisioned on-premises Mule runtimes managed by the MuleSoft-hosted Platform control plane.
*****************************************
Key details to be taken from the given scenario:
>> Organization uses BOTH cloud-based and on-premises systems
>> On-premises systems can only be accessed from within the organization's intranet
Let us evaluate the given choices based on the above key details:
>> CloudHub-deployed Mule runtimes can ONLY be controlled using the MuleSoft-hosted control plane. We CANNOT use the Anypoint Platform Private Cloud Edition control plane to control CloudHub Mule runtimes. So, the option suggesting this is INVALID.
>> Using CloudHub-deployed Mule runtimes in the shared worker cloud managed by the MuleSoft-hosted Anypoint Platform does not, on its own, provide any way to reach the on-premises systems that are only accessible from within the intranet. So, the option suggesting this is INVALID.
>> Using an on-premises installation of Mule runtimes that are completely isolated with NO external network access, managed by the Anypoint Platform Private Cloud Edition control plane, would work for the on-premises integrations. However, with NO external access, integrations with the cloud-based SaaS systems are not possible. Moreover, CloudHub-hosted apps are a better fit for integrating with SaaS-based applications. So, the option suggesting this is INVALID.
The best way to configure and use Anypoint Platform to support these mixed/hybrid integrations is to use a combination of CloudHub-deployed and manually provisioned on-premises Mule runtimes managed by the MuleSoft-hosted Platform control plane.
A Platform Architect inherits a legacy monolithic SOAP-based web service that performs a number of tasks, including showing all policies belonging to a client. The service connects to two back-end systems — a life-insurance administration system and a general-insurance administration system — and then queries for insurance policy information within each system, aggregates the results, and presents a SOAP-based response to a user interface (UI). The architect wants to break up the monolithic web service to follow API-led conventions. Which part of the service should be put into the process layer?
A. Combining the insurance policy information from the administration systems
B. Presenting the SOAP-based response to the UI
C. Authenticating and maintaining connections to each of the back-end administration systems
D. Querying the data from the administration systems
Explanation:
Correct Answer: Combining the insurance policy information from the administration systems
In the API-led connectivity approach, each layer (System, Process, and Experience) has a distinct purpose: System APIs unlock data from the back-end systems (here, authenticating to and querying the life-insurance and general-insurance administration systems), Process APIs orchestrate and aggregate data across those systems independently of any particular client, and Experience APIs reconfigure the data for a specific channel such as the UI. Combining (aggregating) the insurance policy information from the two administration systems is therefore the responsibility of the process layer, as the sketch below illustrates.
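As a minimal illustration of that split, here is a sketch in plain Java, assuming hypothetical getLifePolicies and getGeneralPolicies calls exposed by the system layer; the process-layer responsibility is only the aggregation step in getAllPolicies.

import java.util.ArrayList;
import java.util.List;

public class PolicyProcessLayer {

    // Hypothetical system-layer calls; each system API owns connectivity,
    // authentication, and querying for its own back-end administration system.
    static List<String> getLifePolicies(String clientId)    { return List.of("LIFE-001"); }
    static List<String> getGeneralPolicies(String clientId) { return List.of("GEN-042"); }

    // Process-layer responsibility: orchestrate the two system APIs and
    // combine (aggregate) their results, independent of any particular UI.
    static List<String> getAllPolicies(String clientId) {
        List<String> all = new ArrayList<>(getLifePolicies(clientId));
        all.addAll(getGeneralPolicies(clientId));
        return all;
    }
}

Formatting the combined result as a SOAP (or REST) response for the UI would then belong in the experience layer.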
Refer to the exhibit.
What is true when using customer-hosted Mule runtimes with the MuleSoft-hosted Anypoint Platform control plane (hybrid deployment)?
A. Anypoint Runtime Manager initiates a network connection to a Mule runtime in order to deploy Mule applications
B. The MuleSoft-hosted Shared Load Balancer can be used to load balance API invocations to the Mule runtimes
C. API implementations can run successfully in customer-hosted Mule runtimes, even when they are unable to communicate with the control plane
D. Anypoint Runtime Manager automatically ensures HA in the control plane by creating a new Mule runtime instance in case of a node failure
API implementations can run successfully in customer-hosted Mule runtimes, even when they are unable to communicate with the control plane
Explanation:
Correct Answer: API implementations can run successfully in customer-hosted Mule
runtimes, even when they are unable to communicate with the control plane.
*****************************************
>> API implementations continue to run in customer-hosted Mule runtimes even when connectivity to the MuleSoft-hosted control plane is lost: the control plane is needed for deployment, management, and monitoring, not for processing API requests.
>> We CANNOT use the Shared Load Balancer to load balance APIs deployed to customer-hosted runtimes; it only load balances across CloudHub workers.
A company has started to create an application network and is now planning to implement a Center for Enablement (C4E) organizational model. What key factor would lead the company to decide upon a federated rather than a centralized C4E?
A. When there are a large number of existing common assets shared by development teams
B. When various teams responsible for creating APIs are new to integration and hence need extensive training
C. When development is already organized into several independent initiatives or groups
D. When the majority of the applications in the application network are cloud based
When development is already organized into several independent initiatives or groups
Explanation:
Correct Answer: When development is already organized into several independent
initiatives or groups
*****************************************
>> It would require a lot of coordination effort for a single C4E team to work with multiple development teams that are already organized into several independent initiatives. A single, centralized C4E works well when the different teams share at least a common initiative. So, in this scenario, a federated C4E works better than a centralized C4E.
A system API is deployed to a primary environment as well as to a disaster recovery (DR)
environment, with different DNS names in each environment. A process API is a client to
the system API and is being rate limited by the system API, with different limits in each of
the environments. The system API's DR environment provides only 20% of the rate limiting
offered by the primary environment. What is the best API fault-tolerant invocation strategy
to reduce overall errors in the process API, given these conditions and constraints?
A. Invoke the system API deployed to the primary environment; add timeout and retry logic to the process API to avoid intermittent failures; if it still fails, invoke the system API deployed to the DR environment
B. Invoke the system API deployed to the primary environment; add retry logic to the process API to handle intermittent failures by invoking the system API deployed to the DR environment
C. In parallel, invoke the system API deployed to the primary environment and the system API deployed to the DR environment; add timeout and retry logic to the process API to avoid intermittent failures; add logic to the process API to combine the results
D. Invoke the system API deployed to the primary environment; add timeout and retry logic to the process API to avoid intermittent failures; if it still fails, invoke a copy of the process API deployed to the DR environment
Invoke the system API deployed to the primary environment; add timeout and retry logic to
the process API to avoid intermittent failures; if it still fails, invoke the system API deployed
to the DR environment
Explanation:
Correct Answer: Invoke the system API deployed to the primary environment; add timeout
and retry logic to the process API to avoid intermittent failures; if it still fails, invoke the
system API deployed to the DR environment
*****************************************
There is one important consideration to note in the question: the system API in the DR environment provides only 20% of the rate limiting offered by the primary environment. So, comparatively, far fewer calls are allowed into the DR environment's API than into the primary environment's. With this in mind, let's analyse which is the right and best fault-tolerant invocation strategy.
1. Invoking both system APIs in parallel is definitely NOT a feasible approach because of the 20% limitation on the DR environment. Calling both in parallel on every request would quickly exhaust the DR environment's rate limits, leaving no headroom for the genuine intermittent-failure scenarios when the fallback is actually needed.
2. Another option suggests adding timeout and retry logic to the process API while invoking the primary environment's system API, which is good so far. However, when all retries have failed, it suggests invoking a copy of the process API in the DR environment, which is not right or recommended. Only the system API should be considered for fallback, not the whole process API. Process APIs usually perform heavy orchestration, calling many other APIs, which we do not want to repeat by invoking the DR environment's process API. So this option is NOT right.
3. One more option suggests adding retry (but no timeout) logic to the process API and handling failures by invoking the DR environment's system API directly, instead of retrying the primary environment's system API first. This is not a proper fallback: a proper fallback should occur only after all retries against the primary environment have been performed and exhausted. Here, the option falls back to the DR API on the first failure without retrying the primary API. So, this option is NOT right either.
This leaves one option which is right and the best fit:
- Invoke the system API deployed to the primary environment.
- Add timeout and retry logic to it in the process API.
- If it still fails after all retries, then invoke the system API deployed to the DR environment.
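A minimal sketch of this strategy in plain Java follows; the URLs, timeout value, and retry count are illustrative assumptions, and in a real Mule application the same behaviour would more typically be expressed with the HTTP Request connector's response timeout, an Until Successful scope, and an error handler that calls the DR endpoint.

import java.util.concurrent.*;

public class FaultTolerantInvocation {

    static final int MAX_RETRIES = 3;    // hypothetical retry budget
    static final long TIMEOUT_MS = 500;  // hypothetical per-call timeout
    static final ExecutorService pool = Executors.newCachedThreadPool();

    // Try the primary system API with a timeout and a few retries;
    // only after all retries are exhausted, fall back to the DR system API.
    static String invokeSystemApi() throws Exception {
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            Future<String> call = pool.submit(() -> httpGet("https://primary.example.com/api"));
            try {
                return call.get(TIMEOUT_MS, TimeUnit.MILLISECONDS);
            } catch (TimeoutException | ExecutionException e) {
                call.cancel(true); // give up on this attempt and retry against primary
            }
        }
        // Fallback: the rate-limited DR environment is used only once primary is exhausted.
        return httpGet("https://dr.example.com/api");
    }

    // Stub standing in for a real HTTP client call (assumption, not a real API).
    static String httpGet(String url) { return "response from " + url; }
}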