Service SDK

Overview

The YaaS Service SDK for Java, or simply the Service SDK (Software Development Kit), supports you in developing your own YaaS services and integrating them with YaaS. The Service SDK makes use of several popular technologies like Spring, Jersey, OAuth, RAML, and JSON Schema. It also helps you work efficiently with YaaS RAML Patterns and to apply YaaS API Guidelines in your own implementation.

The Service SDK is based on Maven and is distributed by means of Maven artifacts. These are available in Maven's Central Repository and include binary JARs, JavaDocs, and sources. See the following diagram for an overview of the various Maven modules, plug-ins, archetypes, and POMs that make up the Service SDK.

Service SDK Maven modules

While the Service SDK focuses on Java development, it is not mandatory to use this SDK or even Java to develop YaaS services or interact with YaaS. You can use any other language of your choice if you prefer to have more freedom in service implementation.

You must have Java 8 or higher installed to make use of the YaaS Service SDK for Java.


Archetypes

The SDK provides Maven Archetypes for YaaS Services based on JEE, Spring, the Jersey framework, and Spring Boot. Development projects generated from these archetypes also have out-of-the-box support for the Service Generator and the API Console.

The currently available archetypes are:

Archetype Artifact ID | Description
service-sdk-jersey-spring-archetype | Creates a service based on JEE, Spring, and Jersey, and includes a sample API
service-sdk-jersey-spring-base-archetype | Creates a service based on JEE, Spring, and Jersey
service-sdk-spring-boot-archetype | Creates a service based on JEE, Spring, and Spring Boot

Two of the aforementioned archetypes are almost identical. The service-sdk-spring-boot-archetype is very similar to the service-sdk-jersey-spring-base-archetype because neither has a sample project attached to it. The only difference is that the service-sdk-spring-boot-archetype uses Spring Boot libraries. The service-sdk-jersey-spring-archetype is the only archetype that includes a sample API for CRUD operations with wishlist resources.

Archetype usage

To generate a new service from the service-sdk-jersey-spring-archetype archetype, use the following command:

mvn archetype:generate -U -DarchetypeGroupId="com.sap.cloud.yaas.service-sdk" -DarchetypeArtifactId=service-sdk-jersey-spring-archetype -DarchetypeVersion=RELEASE

After you run the command, Maven prompts for several property values, which allow you to customize the new service project.

By default, the generated API is synchronous. To generate a new service with an asynchronous API, add the -DasyncApi=true option:

mvn archetype:generate -U -DarchetypeGroupId="com.sap.cloud.yaas.service-sdk" -DarchetypeArtifactId=service-sdk-jersey-spring-archetype -DarchetypeVersion=RELEASE -DasyncApi=true

You can generate projects from the two archetypes without the wishlist resources as shown:

For the Jersey Spring archetype:

mvn archetype:generate -U -DarchetypeGroupId="com.sap.cloud.yaas.service-sdk" -DarchetypeArtifactId=service-sdk-jersey-spring-base-archetype -DarchetypeVersion=RELEASE

For the Spring Boot archetype:

mvn archetype:generate -U -DarchetypeGroupId="com.sap.cloud.yaas.service-sdk" -DarchetypeArtifactId=service-sdk-spring-boot-archetype -DarchetypeVersion=RELEASE

You can read more about what the Spring Boot archetype contains in the documentation of the Spring Boot Starter library.

Archetype properties

Standard properties for all Maven archetypes:

Property Name | Default Value | Description
groupId | none | The groupId of the new project. See Maven coordinates.
artifactId | none | The artifactId of the new project. See Maven coordinates.
version | 1.0-SNAPSHOT | The version of the new project. See Maven coordinates.
package | The given groupId | The package to contain all Java classes of the new project. Make sure to use a valid Java package name here.

Custom properties of the jersey-spring-archetype:

Property Name | Default Value | Description
asyncApi | false | Set to true to enable the asynchronous processing feature of the Service Generator. You can adjust this property later in the POM of the generated project.
serviceSdkVersion | The given archetypeVersion | Determines the version of the parent POM and several library dependencies. This property can be useful for backward compatibility. You can adjust this value later in the POM of the generated project.

While Maven prompts for these properties interactively, it can be more convenient to pass them through the command line. Use additional arguments of the form -DpropertyName=propertyValue to do so.
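For example, to generate a project non-interactively with all properties supplied up front, you can combine these arguments with Maven's batch mode (the property values shown are illustrative):

mvn archetype:generate -U -DarchetypeGroupId="com.sap.cloud.yaas.service-sdk" -DarchetypeArtifactId=service-sdk-jersey-spring-archetype -DarchetypeVersion=RELEASE -DgroupId=com.example -DartifactId=my-wishlist-service -Dversion=1.0-SNAPSHOT -Dpackage=com.example.wishlist -DinteractiveMode=false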

The Maven archetype plug-in is very lax at validating the given property values. Therefore, project generation might succeed even if you supply invalid property values. This might lead to problems when you later build the project and run your new service. To check for such problems, change to the directory of the new project, which is based on its artifactId, and trigger a Maven build using the mvn clean install command.

CORS support

The jersey-spring-archetype template has Cross-Origin Resource Sharing (CORS) support enabled automatically through a filter definition in the web.xml file. It is configured to accept all cross-origin requests. Adjust the configuration to your needs by following the third-party CORS Filter Configuration Guide.

The current configuration defined in the web.xml file is shown:

    <filter>
        <filter-name>CORS</filter-name>
        <filter-class>com.thetransactioncompany.cors.CORSFilter</filter-class>
        <async-supported>true</async-supported>
        <!-- Enable CORS for REST HTTP methods -->
        <init-param>
            <param-name>cors.supportedMethods</param-name>
            <param-value>GET,PUT,POST,DELETE,HEAD,OPTIONS</param-value>
        </init-param>
    </filter>

The async-supported flag is activated for a potential asynchronous implementation of your API, but it also works for the traditional synchronous approach.


Super POM

Super POM files make project setup easier by using Maven parent POMs. When working on a large-scale development project, it is important to agree on a few standards. Some of these standards are typically configured in the root pom.xml file of a Maven-based project. The parent POM included in the SDK provides such basic settings.

To use the POM in your service, add a parent POM definition to your root pom.xml file as shown in this example:

<parent>
  <groupId>com.sap.cloud.yaas.service-sdk</groupId>
  <artifactId>service-sdk-superpom</artifactId>
  <version>${service-sdk.version}</version>
  <relativePath/>
</parent>

Manifest configuration

It is a common Java standard to bundle a MANIFEST.MF file in the META-INF folder of each Java artifact, such as a JAR or WAR, to provide meta information about the artifact itself. When using Maven for artifact creation, a basic manifest is added automatically. However, it does not include some standard attributes that are useful for determining the application name, version, or build time. The Super POM configures the maven-jar-plugin as well as the maven-war-plugin to add this information. The following attributes are added to the manifest, using these Maven properties as values:

Implementation-Title: ${project.name}
Implementation-Version: ${project.version}
Implementation-Vendor-Id: ${project.groupId}
Implementation-Vendor: ${project.organization.name}
Build-Time: ${maven.build.timestamp}
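At runtime, you can read most of these attributes through the standard java.lang.Package API, which is backed by the manifest when the class is loaded from a packaged JAR or WAR. A minimal sketch, where MyService stands for any class in your artifact:

    // Reads the attributes that the Super POM writes into MANIFEST.MF
    final Package pkg = MyService.class.getPackage();
    final String title = pkg.getImplementationTitle();     // Implementation-Title
    final String version = pkg.getImplementationVersion(); // Implementation-Version
    final String vendor = pkg.getImplementationVendor();   // Implementation-Vendor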

Using the Maven Eclipse Plug-in

Use the Maven Eclipse plug-in to generate an out-of-the-box Eclipse project setup, including the .project and .classpath files. The Maven Eclipse plug-in adds the generated sources in the target folder to the classpath. To generate the project, use this command:

    mvn eclipse:clean eclipse:eclipse

Or, if you prefer to create an Eclipse project that is ready to use with the M2Eclipse plug-in, use this command:

    mvn eclipse:clean eclipse:m2eclipse


Property placeholders

You can use and configure most of the libraries that the Service SDK provides with Spring, any other dependency injection library, or directly.

For convenience, a Spring configuration XML is provided out-of-the-box for most of the libraries, which can simplify their integration in a Spring-based application. These configuration files contain property placeholders for which it is common to use Spring's PropertyPlaceholderConfigurer to inject settings from external sources, such as environment variables, into your application context.

To activate this feature, include the following line in your Spring configuration:

<context:property-placeholder />

It is a good practice to list all of the externalized properties, together with default values, in a properties file.

You can configure the PropertyPlaceholderConfigurer to check against the Java System properties. If it cannot find a property, it can also check in the specified properties files. To do this, use the following configuration:

<context:property-placeholder location="classpath:/default.properties,classpath*:test.properties"/>
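With that in place, any bean definition can reference such a property. A minimal sketch, where MY_SERVICE_URL and com.example.MyClient are hypothetical names:

<bean id="myClient" class="com.example.MyClient">
    <!-- Resolved from the configured property sources, for example default.properties -->
    <property name="serviceUrl" value="${MY_SERVICE_URL}"/>
</bean>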

When creating a service using the Jersey archetypes, this feature is already activated. For a service created using the Spring Boot archetype, you can find the properties in the application.properties file, which is picked up by default. You can read more about externalizing properties in the Spring Boot documentation.

For more information about customization, see the Spring website.


Aggregating Libraries

The SDK provides JAR-based libraries for writing REST-based services.

The Libraries module is a POM project that aggregates all libraries typically needed for YaaS development. To write a Core or Commerce service, add this single dependency to your project:

<dependency>
  <groupId>com.sap.cloud.yaas.service-sdk</groupId>
  <artifactId>service-sdk-libraries</artifactId>
  <version>${service-sdk.version}</version>
</dependency>

This library provides a pre-defined Spring configuration but does not introduce Spring dependencies. It bundles all Spring configurations of the aggregated libraries into one configuration. If your application uses Spring, integrate the libraries to your application context by adding this import:

<import resource="classpath:/META-INF/libraries-spring.xml" />

The aggregating libraries include the following libraries:

  • Logging (service-sdk-logging)
  • Logging Filters (service-sdk-logging-filters)
  • Security (service-sdk-security)
  • Pattern Support (service-sdk-pattern-support)
  • Jersey Support (service-sdk-jersey-support)
  • Servlet Support (service-sdk-servlet-support)
  • API Console (service-sdk-api-console)


Spring Boot Starter Library

The Spring Boot starter library helps you to kickstart a Spring Boot YaaS service. It includes most of the Service SDK libraries and their configurations out-of-the-box.

To add the library to your project, add the following dependency to your pom.xml file:

<dependencies>
    <dependency>
        <groupId>com.sap.cloud.yaas.service-sdk</groupId>
        <artifactId>service-sdk-spring-boot-starter</artifactId>
        <version>${service-sdk.version}</version>
    </dependency>
</dependencies>
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <executions>
                <execution>
                    <goals>
                        <goal>repackage</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Features

The library provides most Service SDK features out-of-the-box. In addition, it provides these Jersey-based features, which require Jersey on the classpath:

  • Automatic registration of your Jersey endpoints
  • Automatic registration of Jersey exception mappers
  • Automatic registration of Service SDK Jersey features
  • Jersey client configuration together with the OAuth2 Filter
  • A ping endpoint

Jersey-based feature support

The Jersey-related features are activated only when Jersey is on your classpath; otherwise, they remain inactive.

Configuration

To configure the libraries, add the following configuration values to the application.properties file:

Config Value | Description
yaas.service.basic-auth.credentials | The Basic Authorization credentials that the service allows. The expected format is username:password.
yaas.service.basic-auth.exclude-paths | A comma-separated list of paths to exclude from Basic Authentication.
yaas.clients.oauth2.token-endpoint-uri | Specifies the token endpoint for OAuth2
yaas.clients.oauth2.client-id | Your service's client ID, found in the Builder
yaas.clients.oauth2.client-secret | Your service's client secret, found in the Builder
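For example, a minimal application.properties using these keys might look as follows (all values are illustrative):

yaas.service.basic-auth.credentials=myuser:mypassword
yaas.clients.oauth2.token-endpoint-uri=https://api.example.com/hybris/oauth2/v1/token
yaas.clients.oauth2.client-id=my-client-id
yaas.clients.oauth2.client-secret=my-client-secret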

Jersey configuration

Config Value | Description
yaas.clients.jersey.request-response-logging.logger-name | The Jersey logger to use, used in JerseyClientAutoConfiguration
yaas.service.jersey.request-response-logging.logger-name | The Jersey logger to use, used in LoggingFilterAutoConfiguration
yaas.service.jersey.enable-json-feature | Enables or disables the JSON to DTO conversion. Accepted values are true and false. The default value is true.
yaas.service.jersey.enable-ping-resource | Enables or disables the ping endpoint. Accepted values are true and false. The default value is true.
yaas.service.jersey.enable-security-feature | Enables or disables the SDK's Jersey Security feature. Accepted values are true and false. The default value is true.


Logging

YaaS provides infrastructure that monitors an incoming request across all applications in a consistent way. One aspect of the monitoring feature is the ability to analyze the logs of all the applications. To enable the infrastructure to collect all of the logs, all applications must log to the console. To enable the monitoring tool to aggregate all logs in a usable way, the logs of each application must follow a specific log pattern.

This library enables your application to be compliant with the stated requirements by defining a fixed logging configuration. It is based on SLF4J using logback. In addition, it introduces the following logging framework adapters without shipping the frameworks themselves:

  • log4j
  • Java logging
  • Commons logging
  • Logback

The library provides a default logback configuration that directs the logs to the console. It also specifies a log pattern, including identifiers like tenant and request IDs for easy log analysis.

The log format depends on whether the service is running in local development mode or is deployed on Cloud Foundry. For local development, logs are printed line by line with a leading timestamp and color highlighting, so they are concise and human-readable. After the service is deployed in the Cloud Foundry environment, logs are printed in a predefined JSON format, which makes them easy to parse with the Logstash log management tool and to display later in tools like Kibana.

Logging Flow

Integration

To use the library, import one of the aggregating YaaS libraries, or add a dependency to it in your pom.xml file as shown in this example:

<dependency>
  <groupId>com.sap.cloud.yaas.service-sdk</groupId>
  <artifactId>service-sdk-logging</artifactId>
  <version>${service-sdk.version}</version>
</dependency>

To use Java logging, first enable the SLF4J bridge as described in the SLF4J-JUL manual. Ensure that there is no logback.xml file in your classpath. If the log pattern in your application does not look like the following examples, it is likely that you have a custom logback.xml file in your classpath.

Logging for local development

The default pattern for local deployment is specified as follows:

    %date{HH:mm:ss} %highlight([%-5level]) %cyan([%logger{36}]) %magenta([%mdc]) %m%n

You can override it by providing your custom logback file named logback-include-custom.xml and placing it in the classpath. For more information, see Log pattern.

The %mdc placeholder prints the content of the MDC log context in the format key1=value1, key2=value2. You can add custom values to the context by invoking the following in your Java code:

MDC.put("key1", "value1");

Logging in JSON format

If the application is deployed to an integrated environment, you might want to automate the analysis and processing of your logs, in which case a more machine-processable log format like JSON is appropriate.

For this, you can set the environment variable LOG_FORMAT_JSON to true. If the service is deployed in Cloud Foundry, the VCAP_APPLICATION environment variable is present and causes the same effect, so the logging format automatically changes to JSON. By default, the JSON is not pretty printed to limit the amount of network traffic used. To see the JSON pretty printed, set the LOG_JSON_PRETTY system property to true when starting the service.
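For example, a local start of a packaged service with JSON logging and pretty printing enabled might look as follows (the artifact name is illustrative):

LOG_FORMAT_JSON=true java -DLOG_JSON_PRETTY=true -jar target/my-service.jar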

The default JSON format looks similar to the following:

{
    "type" : "service",
    "org" : "hybris_prod",
    "space" : "framefrog",
    "service" : "email",
    "version" : "1.3.2",
    "instance" : 0,
    "time" : "2015-03-25T16:25:00.522+00:00",
    "level" : "INFO",
    "tags" : ["myTag1","myTag2"],
    "log" : {
      "logger" : "com.sap.cloud.yaas.email.Sender",
      "message" : "5 Products imported in 34578934754389578 sec",
      "requestId" : "45jkh6456",
      "hop" : 2,
      "client" : "hybris.email",
      "tenant" : "bananas11",
      "vcapRequestId" : "dfgdf789686f",
      "thread" : "thread #56",
      "error" : "java.lang.NullPointerException",
      "stacktrace" : "here goes the stacktrace",
      "appCorrelationId" : "12345l",
      "custom" : {
        "myCustom1": "test1",
        "myCustom21": 5"
      }
    }
}

Every log contains these base attributes:

  • type - The type of application causing the log. This is always service. Other values are reserved for infrastructure applications, such as databases.
  • org - The identifier of the environment the application is running in, configured by the environment variable ENV_NAME. In Cloud Foundry, the value should be the org. The default value is unknown.
  • space - The identifier of the area of the environment, configured by the environment variable TEAM_NAME. In Cloud Foundry, the value should be the space. The default value is unknown.
  • service - The identifier of the application causing the log, configured by the environment variable APP_NAME. The default value is unknown.
  • serviceOwner - The identifier of the tenant that owns the application causing the log. This is usually set for billing purposes.
  • version - The version of the application causing the log, configured by the environment variable APP_VERSION. The default value is unknown.
  • instance - The optional index number of the application cluster instance. This is read from the Cloud Foundry VCAP_APPLICATION environment variable, if present.
  • time - The timestamp of the log in ISO 8601 format: yyyy-MM-dd'T'HH:mm:ss.SSSZ
  • level - The SLF4J log level
  • tags - Use this to add custom tags, which are derived from SLF4J Markers.

The log property contains some important information about the context from which the log originated:

  • logger - The Java class that produced the log
  • message - The log message
  • thread - The thread from which the log originated
  • error - In case of an error log, this contains the class of the exception thrown
  • stacktrace - In the case of an error log, this contains the stack trace of the preceding exception
  • appCorrelationId - Contains a business-specific correlation ID
  • custom - Optional field that contains multiple custom values, either as strings or numbers

The log property also contains information about the REST request itself:

  • requestId - The unique ID identifying the request
  • hop - A number identifying how many times the request was passed from service to service
  • client - The client ID of the caller that makes the request
  • clientOwner - The ID of the tenant that owns the client of the caller that makes the request. This is usually set for billing purposes.
  • tenant - The tenant ID of the caller that is making the request
  • vcapRequestId - The optional request ID that Cloud Foundry maintains. This can change more often than the requestId.

The information related to the REST request is taken from the logging MDC and must be provided there, for example by a servlet filter that sets up the MDC per request. The logging-filters library provides a ready-to-use servlet filter for that purpose.

Configuration and customization

You can perform all configuration by setting environment variables. The following variables are available:

Name | Default Value | Type | Description
ENV_NAME | unknown | string | Sets the org attribute for a JSON-formatted log. For example, core-prod or commerce-prod.
TEAM_NAME | unknown | string | Sets the space attribute for a JSON-formatted log. For example, bananas or toad.
APP_NAME | unknown | string | Sets the service attribute for a JSON-formatted log. For example, email.
APP_VERSION | unknown | string | Sets the version attribute for a JSON-formatted log. For example, 3.1.4.
APP_API_VERSION | unknown | string | Contains the API version of the application. Use it to build the serviceBasePath attribute for JSON-formatted audit log entries. For example, v1.
APP_YAAS_ORG | unknown | string | Contains the identifier of the organization under which the service is registered in YaaS. Use it to build the serviceBasePath attribute for JSON-formatted audit log entries. For example, hybris.
LOG_JSON_PRETTY | false | boolean | Formats JSON logs with line breaks. A log analyzer usually expects individual logs separated by line breaks, so pretty-printing the JSON might prevent your log analyzer from interpreting the logs.
LOG_PATTERN | %date{HH:mm:ss} %highlight([%-5level]) %cyan([%logger{36}]) %magenta([%mdc]) %m%n | string | Log pattern to use for local development, for logging in a format other than JSON.
LOG_COLOR | true | boolean | Activates colored logging for local development, for logging in a format other than JSON.
LOG_HYBRIS_LEVEL | INFO | enum | Log level for loggers with the name prefix com.sap.cloud.yaas or com.hybris.
LOG_ROOT_LEVEL | WARN | enum | Log level for the root logger.
LOG_JSON_INCLUDE_NULL_FIELDS | false | boolean | Also prints null fields of objects that are serialized and included in the JSON output.

You can customize the logging setup by adding a custom logback include to the default logback configuration. For this purpose, the default configuration optionally includes the classpath resource logback-include-custom.xml. By providing a logback configuration with that name in your classpath, you can override parts of the default configuration or add new ones. The content of the custom file must use the root XML tag included, as shown in this example:

<included>
    <appender ...>
        ...
    </appender>
</included>

For more information, see the Logback Configuration Manual.

Logging level

The bundled logging configuration sends logs with the level WARN and above to the console, except for loggers whose names start with com.sap.cloud.yaas or com.hybris, which use the log level INFO. You can change these settings by setting these environment variables:

  • LOG_ROOT_LEVEL
  • LOG_HYBRIS_LEVEL

Starting your application with the -DLOG_ROOT_LEVEL=DEBUG -DLOG_HYBRIS_LEVEL=DEBUG command enables the debug level for all messages.

You can define custom loggers, such as:

<included>
    <logger name="chapters.configuration" level="INFO"/>
</included>

Log pattern

To override the log pattern, set the environment variable LOG_PATTERN.

Starting the application with -DLOG_PATTERN="%date{HH:mm} %m%n" overrides the current log pattern.

Logging metrics

The JSON log format supports treating metrics, such as current memory consumption, like ordinary logs. Therefore, you can push any metric to stdout as well. To log a metric value, log a message with an SLF4J Marker named METRIC, which adds a corresponding tag in the JSON log format. A log tagged as METRIC conforms to a dedicated schema and might look similar to the following:

{
    "type" : "service",
    "org" : "hybris_prod",
    "space" : "bananas",
    "service" : "configuration",
    "serviceOwner" : "core",
    "version" : "1.3.2",
    "instance" : "0",
    "time" : "2015-03-25T16:25:00.522+00:00",
    "level" : "INFO",
    "tags" : ["METRIC"],
    "metric" : {
      "group" : "cpuload",
      "values" : {
        "1minAverage" : 2.3,  
        "5minAverage" : 1.3,
        "7minAverage" : 1.3
      },
      "tenant" : "bananas11",
      "client" : "hybris.email",
      "user" : "anonymous@sap.com",
      "clientOwner" : "banana",
      "vcapRequestId" : "dfgdf789686f",
      "appCorrelationId" : "12345l",
      "requestId" : "45jkh6456",
      "hop" : 2,
      "meta" : {
         "region": "us-east",
         "cpus":   "4",
         "az" :    "us-east-a",
         "circuit-breaker": "on"
      }
    }
}

If the tenant, client, user, requestId, vcapRequestId, hop, serviceOwner, or clientOwner values are available in the MDC, they are logged automatically, for example when logging a metric from within a request scope.

You can use the meta field to log additional custom values in addition to the predefined fields. Values can be either strings or numbers.

For convenience, the logging library provides metric logging helper methods. See the MetricLogger class described in the following section.

Convenience classes

The logging library provides several helper classes that simplify common use cases for logging. They are all based on a builder wrapping an actual SLF4J Logger and facilitating its use in a specific scenario. For example, a helper class could enforce that the METRIC tag is always emitted in a metric logging scenario.

All convenience classes described in the following sections are mutable and are not designed for use in a multi-threaded context. Do not try to reuse an instance of a convenience class for multiple logging operations. Obtain a new instance of such a class for each log operation.

Basic logging

For basic logging purposes, the class SimpleLogger is available, which allows you to set different tags easily. The tags are transformed into a proper SLF4J Marker:

    SimpleLogger
            .newInstance(LoggerFactory.getLogger("myLogger"))
            .tags("TAG1", "TAG2")
            .info("myLogMessage");

Or, more concisely:

    SimpleLogger
            .newInstance(LoggerFactory.getLogger("myLogger"), "TAG1", "TAG2")
            .info("myLogMessage");


All logging library classes map tags to SLF4J Markers. However, the semantics of Marker usage are slightly different than the default in SLF4J. The logging library always uses detached Markers, which means that Markers with the same name are still fully independent from each other. In particular, Marker references that are set up globally are never reflected on the Markers that represent tags.
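To see what detached means in plain SLF4J terms, here is a small sketch using the standard MarkerFactory API (org.slf4j.Marker and org.slf4j.MarkerFactory):

    // Detached markers with the same name are fully independent objects
    final Marker first = MarkerFactory.getDetachedMarker("TAG1");
    final Marker second = MarkerFactory.getDetachedMarker("TAG1");

    // A reference added to one detached marker is not visible on the other
    first.add(MarkerFactory.getMarker("SOME_REFERENCE"));
    // first.contains("SOME_REFERENCE") is true, second.contains("SOME_REFERENCE") is false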

The ParameterLogger is a SimpleLogger that allows you to attach key-value data to a log:

    ParameterLogger
            .newInstance(LoggerFactory.getLogger("myLogger"), "TAG")
            .parameter("key1","valueA")
            .parameter("key2","valueB")
            .info("myLogMessage");

Log audit messages

Another common use case is to mark logs as audit-specific. Such logs should carry a consistent tag. The SimpleAuditLogger enforces the use of the AUDIT tag for you:

    SimpleAuditLogger
            .newInstance(LoggerFactory.getLogger("myLogger"), "ANOTHER_TAG")
            .parameter("key","value")
            .info("myLogMessage");

Log metrics

To switch from the ordinary log format to the metric format, the METRIC tag is required. Furthermore, you must typically provide a metric group name and perhaps a tenant, a client, and a user, in addition to the values themselves. To easily create such a metric log, you can use the MetricLogger.

    MetricLogger
            .newInstance()
            .group("cpuAverages")
            .tenant("myTenant")
            .client("callingClient")
            .user("myUser")
            .addValue("1minAverage", 3)
            .addValue("3minAverage", 4)
            .log();

To log a metric with multiple values, you can use the addValue(valueKey, value) or values(Map<String, Object>) method to automatically serialize the values to a JSON String and store it to the MDC.
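For the values(Map<String, Object>) variant, a minimal sketch (assuming java.util.Map and java.util.HashMap are imported):

    final Map<String, Object> averages = new HashMap<>();
    averages.put("1minAverage", 2.3);
    averages.put("5minAverage", 1.3);

    MetricLogger
            .newInstance()
            .group("cpuAverages")
            .values(averages)
            .log();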

Log business metrics

To log business metrics, use the BUSINESS tag in addition to the METRIC tag. The MetricLogger has a factory method for creating a logger that sets those tags automatically. For business metrics, you might want to use the clientOwner and serviceOwner methods to set the owner tenant of the caller and owner tenant of the service being called, respectively. To fill those values, you can use the header values defined in the billing-aware trait.

    MetricLogger
            .newBusinessInstance()
            .metric("myMetricName")
            .group("businessValues")
            .addValue("emailsSent", 999)
            .tenant("myTenant")
            .client("callingClient")
            .user("myUser")
            .clientOwner("callingClientOwner")
            .serviceOwner("calledServiceOwner")
            .log();

Logging for asynchronous calls

The JSON log format retrieves request-related data, such as the current tenant or request ID from the MDC of SLF4J. It is up to you to populate the data into the MDC, for example, using the servlet filter provided in the logging-filters library. As the MDC is based on a ThreadLocal, this strategy works when only one thread is used for processing a request (synchronous). Using asynchronous processing of requests results in different threads requiring the same MDC content during processing. To populate the MDC content to all threads contributing to the processing, additional effort is required. You must ensure that whenever a thread starts processing a task, the MDC content of the calling thread gets populated to the new thread. After the task is complete, remove the MDC content to ensure that the next task does not get outdated MDC data.

To populate the MDC to the threads processing a task, the logging library provides several wrappers for the usual classes related to task execution in Java. You can wrap:

  • java.lang.Runnable with DelegatingMDCRunnable
  • java.util.concurrent.Callable with DelegatingMDCCallable
  • java.util.concurrent.Executor with DelegatingMDCExecutor
  • java.util.concurrent.ExecutorService with DelegatingMDCExecutorService
  • rx.Scheduler of the io.reactivex.rxjava library with DelegatingMDCScheduler

The wrappers ensure that the given delegates are executed with the correct MDC content, which is either given as a constructor parameter or copied from the current thread.

When using thread pools, threads are reused across asynchronous calls, and the MDC content is not reset automatically. Therefore, the variables needed for logging might contain incorrect values. Using the MDC wrappers described here ensures that the MDC is populated, but also cleaned, for each asynchronous task, so that the log entries contain the correct values.

Example

In Java, it is recommended to use an Executor or ExecutorService to start an asynchronous task execution. The code might look similar to the following:

MDC.put(MY_VALUABLE_KEY, "testMdcExecutor");
try
{
    final ExecutorService pool = Executors.newFixedThreadPool(2);
    pool.submit(()->
        {
            //my task logic
            LOG.info("Task finished");
        }
    );
}
finally
{
    MDC.remove(MY_VALUABLE_KEY);
}

Any task executed in this manner does not see the MDC content that was present at the time of task creation, so MY_VALUABLE_KEY is either null or outdated when the log is written. Assume that the main thread is processing an incoming request that has a related tenant, meaning the hybris-tenant header is present, and that the tenant was populated into the MDC of the main thread by the appropriate servlet filter. The tenant is then not present during the task execution, and therefore missing from the log message. To fix this problem, populate the current MDC content of the main thread to the thread executing the task, but only for the duration of the task execution. To do this, adjust the sample code as follows:

MDC.put(MY_VALUABLE_KEY, "testMdcExecutor");
try
{
    final ExecutorService pool = new DelegatingMDCExecutorService(Executors.newFixedThreadPool(2));
    pool.submit(()->
        {
            //my task logic
            LOG.info("Task finished");
        }
    );
}
finally
{
    MDC.remove(MY_VALUABLE_KEY);
}

By wrapping the ExecutorService with the class that the logging library provides, any task that the ExecutorService schedules has the correct MDC content present.

Hystrix

Hystrix uses a custom ExecutorService, so you must use the interception hook that Hystrix provides to wrap any task execution. To do this, register the custom HystrixConcurrencyStrategy plug-in that the logging library provides, the DelegatingMDCHystrixConcurrencyStrategy, by making the following call in your application logic:

    HystrixPlugins.getInstance().registerConcurrencyStrategy(new DelegatingMDCHystrixConcurrencyStrategy());


Logging Filters

Motivation

After you integrate the service-sdk-logging library, the logs written to the console look consistent and well-formatted. However, dynamic information related to incoming requests is still missing, specifically the values of the standard hybris headers. The service-sdk-logging-filters library provides a convenient way to include these values in your logs.

Effect

The service-sdk-logging-filters module depends on the service-sdk-logging library. It provides a filter that populates the standard hybris headers (such as the hybris-tenant header) of any incoming request into the logging context (MDC), so they are available as dynamic fields in the log. The library ships a servlet filter based on the Servlet API, so it must run in a Servlet API-compliant container. The filter reads the custom headers of each incoming request and puts the values into the logging context (MDC); they are removed again at the end of the filter chain. You can extend the functionality of the logging filter.

Configuration

To use the library, import the aggregating library, or add a dependency to your pom.xml file. You do not have to import the logging library, because it is imported together with this one.

    <dependency>
      <groupId>com.sap.cloud.yaas.service-sdk</groupId>
      <artifactId>service-sdk-logging-filters</artifactId>
      <version>${service-sdk.version}</version>
    </dependency>

You also need to add the filter of the library to your web.xml file. When your application is based on Spring, the recommended way is to use the Spring delegating filter proxy:

    <filter>
      <filter-name>loggingFilter</filter-name>
      <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
    </filter>

Then, define a mapping for it:

    <filter-mapping>
      <filter-name>loggingFilter</filter-name>
      <url-pattern>/*</url-pattern>
    </filter-mapping>

As the proxy looks up a filter object defined in Spring, the preconfigured Spring file must be imported into your Spring context. This is only necessary if you are not using the aggregation library.

    <import resource="classpath:/META-INF/logging-filters-spring.xml" />

If you are not using Spring in your application, you can directly define the filter class to use in your web.xml file:

    <filter>
      <filter-name>loggingFilter</filter-name>
      <filter-class>com.sap.cloud.yaas.servicesdk.loggingfilters.LoggingContextFilter</filter-class>
    </filter>

Besides the request header provisioning, the filter initializes the Java utility logging during its init phase, specifically the jul-to-slf4j bridge. If this conflicts with your Java utility logging setup, disable it using the servlet init-param log.jul.init, or by setting it directly on the filter instance.
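For example, assuming the init-param accepts a boolean value, the non-Spring filter declaration with the bridge initialization disabled might look like this:

    <filter>
      <filter-name>loggingFilter</filter-name>
      <filter-class>com.sap.cloud.yaas.servicesdk.loggingfilters.LoggingContextFilter</filter-class>
      <init-param>
        <param-name>log.jul.init</param-name>
        <param-value>false</param-value>
      </init-param>
    </filter>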


Audit Library

The Audit Library helps you track changes to data covered under personal data protection laws. Such a data change is called an audit event. These events are managed in the Audit Ingestion service. You can read more about the semantics of audit events in the Audit Ingestion service documentation.

The Service SDK provides a support library for creating audit events in the Audit Ingestion service.

Integration

To use the library, add a dependency to it in your pom.xml file as shown in this example:

<dependency>
  <groupId>com.sap.cloud.yaas.service-sdk</groupId>
  <artifactId>service-sdk-audit</artifactId>
  <version>${service-sdk.version}</version>
</dependency>

Next, import the Spring configuration file:

<import resource="classpath:/META-INF/audit-spring.xml"/>

Or use the non-Spring variant:

public AuditServiceClient auditServiceClient()
{
    final Client auditServiceJerseyClient = ClientBuilder.newClient();
    final DefaultInternalAuditServiceClient defaultInternalAuditServiceClient = new DefaultInternalAuditServiceClient(
            auditServiceJerseyClient);
    defaultInternalAuditServiceClient.setAuditServiceUrl(auditServiceUrl);
    final ScheduledThreadPoolExecutor scheduledExecutorService = new ScheduledThreadPoolExecutor(1);
    final AuthorizedInternalAuditServiceClient authorizedInternalAuditServiceClient = new AuthorizedInternalAuditServiceClient(
            defaultInternalAuditServiceClient, authorizedExecutionTemplate);
    final RetryConfiguration retryConfiguration = new RetryConfiguration();
    retryConfiguration.setMaxRetries(1);
    retryConfiguration.setRetryDelaySeconds(10);
    final RetryingAuditServiceClient internalAuditServiceClient = new RetryingAuditServiceClient(
            authorizedInternalAuditServiceClient,
            retryConfiguration,
            scheduledExecutorService, 1000);
    final DefaultAuditServiceClient auditServiceClient = new DefaultAuditServiceClient(internalAuditServiceClient,
            internalAuditServiceClient);
    return auditServiceClient;
}

To call the Audit Ingestion service, you must authorize your service. The Audit Library uses the Service SDK's Authorization Library to authorize your service against the Audit Ingestion service. To configure it, you can use the same AuthorizedExecutionTemplate definition as described in the documentation. When the sourceType is SourceType.TENANT, the Audit Library authorizes the request to the Audit Ingestion service against the tenant set as the source. When the sourceType is set to another value, the service uses the tenant that owns your service.

To configure the URL of the Audit Ingestion service, set the environment variable AUDIT_SERVICE_URL.

You must configure your client in the Builder to be able to create audit events. See the documentation for the Audit Ingestion service for more details.

Usage

After you configure the library, the AuditServiceClient instance can send the audit events to the Audit Ingestion service.

The following sections describe how to create each type of event.

Personal data change

To send an audit event for personal data changes, use:

auditServiceClient.publish(
    AuditEventBuilderFactory
        .personalDataChange()
        .serviceBasePath(serviceBasePath)
        .serviceRegion(serviceRegion)
        .userId(userId)
        .source(source)
        .sourceType(SourceType.ACCOUNT)
        .time(timeStamp)
        .addPersonalDataChange(dataName, oldValue, value, AuditEventOperation.CHANGE)
        .dataSubjectType(dataSubjectType)
        .dataSubjectId(dataSubjectId)
        .objectId(objectId)
        .objectType(objectType)
        .build(), new DiagnosticContext(requestId, hop));

Configuration change

For configuration changes, use:

auditServiceClient.publish(
    AuditEventBuilderFactory
        .configurationChange()
        .serviceBasePath(serviceBasePath)
        .serviceRegion(serviceRegion)
        .userId(userId)
        .source(source)
        .sourceType(SourceType.ACCOUNT)
        .time(timeStamp)
        .addConfigurationChange(dataName, oldValue, value, AuditEventOperation.CHANGE)
        .objectId(objectId)
        .objectType(objectType)
        .build(), new DiagnosticContext(requestId, hop));

Security-relevant event

For security-relevant events, use:

auditServiceClient.publish(
    AuditEventBuilderFactory
        .securityEvent()
        .serviceBasePath(serviceBasePath)
        .serviceRegion(serviceRegion)
        .userId(userId)
        .source(source)
        .sourceType(SourceType.ACCOUNT)
        .time(timeStamp)
        .message(message)
        .clientIp(clientIp)
        .build(), new DiagnosticContext(requestId, hop));

When an audit event originates from an unknown caller, for example a failed Basic Authentication attempt, use SourceType.ACCOUNT as the sourceType and set the source to the name of your tenant.

Refer to the JavaDocs or the API of the Audit Ingestion service for information about the meaning of the audit event's properties.

Error handling

The unsuccessful creation of an audit event must be traceable. When audit event creation fails, a log message is recorded at the ERROR level, providing all of the audit event information except the actual personal data, that is, everything except the old and new values and the client IP address.

Check your logs regularly to verify whether sending audit events fails.

Prefill audit event values

If required data is not specified before creating an audit event, the Audit Library tries to look up the data in the logging context (MDC) and prefill some of it before sending the event to the Audit Ingestion service. Therefore, use the Logging Filters library in addition to the Audit Library to automatically populate the logging context with values from the headers of incoming requests. Alternatively, you can fill in the logging context yourself before sending the audit event. See the following table for an overview of the logging context fields and their meanings.

Incoming Request Header | MDC Field Key | Audit Event Payload or Header
hybris-user-id | userId | userId property
hybris-user | user | userId property
hybris-tenant | tenant | source and sourceType property
hybris-org | organization | source and sourceType property
hybris-user | user | source and sourceType property
hybris-hop | hop | hybris-hop header
hybris-request-id | requestId | hybris-request-id header
X-Forwarded-For | xForwardedFor | clientIp property (parsed)

If you do not manually set source and sourceType, the system resolves them in the following order:

  1. When a tenant is provided in the tenant MDC field, source populates from tenant MDC field and sourceType is SourceType.TENANT.
  2. If a tenant is not specified, but there is an organization provided in the organization MDC field, source populates from the organization MDC field and sourceType is SourceType.ORGANIZATION.
  3. If a tenant and organization are not specified, but a user is provided in the userId MDC field, source populates from the userId MDC field and sourceType is SourceType.ACCOUNT.

If sourceType is of type SourceType.TENANT, the value of the source is used as the tenant name against which the library authorizes your service in the Audit Ingestion service.

If you do not set serviceBasePath manually, the system derives it from the APP_YAAS_ORG, APP_NAME and APP_VERSION environment variable values. See logging for more information about setting these environment variables.

If you do not set serviceRegion manually, the system derives it from the REGION_ID environment variable value.

To prevent the automatic population of values, you can always set the values directly in the event creation builder.

Retry implementation

To support retries, the Audit Library uses queued task execution with internal support for scheduled retries. When publishing an audit event, the Audit Library creates a task for publishing the event and sends it to the queued task executor. To limit the number of tasks queued at the same time, configure a value greater than zero for the environment variable AUDIT_MAX_SCHEDULED_TASKS. If the limit is exceeded, the Audit Library does not send the audit event, but logs a warning instead.

If you see a warning about the buffer size in your logs and observe spikes of audit events, set the environment variable AUDIT_MAX_SCHEDULED_TASKS to a higher value. If you observe a constant, high required throughput of audit events, increase the number of threads processing the audit events using the environment variable AUDIT_THREADS.

When sending audit events asynchronously fails, the Audit Library can reschedule another task with a delay. You can configure the delay using the environment variable AUDIT_RETRY_DELAY_SEC, until the system reaches the maximum number of retries. To configure the number of retries, use the environment variable AUDIT_RETRIES. To disable retries, set the value of AUDIT_RETRIES to zero.

If sending the event fails after retries, the Audit Library logs a warning. If you observe availability issues with the Audit Ingestion service, increase the number of retries.

Environment variables

Use these environment variables to configure the Audit Library:

  • AUDIT_SERVICE_URL is the URL of the Audit Ingestion service.
  • AUDIT_THREADS is the number of threads that handle the creation of audit events and creation retries. The default value is 1.
  • AUDIT_RETRIES is the number of times the Audit Library retries sending the audit event until it logs a failure. The default value is 1.
  • AUDIT_RETRY_DELAY_SEC is the delay in seconds between retry attempts. The default value is 10.
  • AUDIT_MAX_SCHEDULED_TASKS is the number of audit tasks the Audit Library can schedule for processing at the same time. Set the value to 0 to remove the limit, but consider the possible effects on memory consumption. The default value is 1000.
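For example, a deployment might configure the library as follows (all values are illustrative):

export AUDIT_SERVICE_URL=https://api.example.com/hybris/audit/v1
export AUDIT_THREADS=2
export AUDIT_RETRIES=3
export AUDIT_RETRY_DELAY_SEC=30
export AUDIT_MAX_SCHEDULED_TASKS=5000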

Troubleshooting

If you encounter a 403 Forbidden error, make sure that you requested the required scopes for the Audit Ingestion service in the Builder setting for your client.

If you encounter 400 Bad Request errors when creating audit events, refer to the Audit Ingestion service API for the fields and headers the service expects.


Pattern Support

This library provides support to efficiently leverage the standard YaaS RAML patterns that are exposed at https://pattern.yaas.io.

It mainly exposes Java constants for every pattern, to avoid typing mistakes and repeated literals. The following kinds of Java classes are available:

  • Common constants - Constants that are not related to specific patterns and are more of a common nature, such as error response types
  • Schema classes and constants - Classes corresponding to the structures defined in schemas, as well as constants for every field
  • Trait constants - Constants for any query parameter or header name used in a trait
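As a purely illustrative sketch of the idea (the class and constant names below are hypothetical, so check the library's JavaDocs for the actual ones), such constants replace raw string literals when reading trait-defined headers in a JAX-RS resource:

    // Hypothetical constant instead of the raw "hybris-tenant" literal
    final String tenant = httpHeaders.getHeaderString(YaasAwareTrait.TENANT_HEADER);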


RAML Rewriter

The RAML Rewriter is a library for processing a RAML source with different kinds of rules. It comes with a Servlet Filter that you can use to serve different representations of an API, for example the internal or external representation of a service. For the internal usage of a service, the RAML definition is expanded by resolving and inlining all of the includes to produce a single file. For the external usage, the base URI and trait replacement features are also required.

The following sections explain these features in detail.

Trait replacement

The trait replacement functionality converts a trait definition of a RAML file dynamically, to reflect a different API representation for internal and external usage. By default, this functionality replaces any yaasAware and sessionAware trait with OAuth2 usage. This feature allows you to derive the external representation of an API directly from the RAML definition that reflects the internal representation.

For example, replace the default configuration of the feature:

- !include https://pattern.yaas.io/[any version]/trait-yaas-aware.yaml

With:

- !include https://pattern.yaas.io/v1/trait-oauth2.yaml

You can replace the usage of the trait in the definition, as well. For example, replace:

    is: [someTrait, yaasAware, somethingElse]

With:

    is: [someTrait, oauth2, somethingElse]

By default, the RAML Rewriter replaces the trait-yaas-aware.yaml and trait-session-aware.yaml files with the trait-oauth2.yaml file, as long as you use the reference of https://pattern.yaas.io.

Base URI replacements

For the external representation, adapt the base URI of the RAML definition to the public domain used by external requests. You can configure the RAML Rewriter to replace the base URI of the RAML definition with a specified URI. For example, with https://api.yaas.io/myService/v2 configured as the base URI replacement, the RAML Rewriter replaces the baseUri:

baseUri: http://my-service.hybris.com/root

With:

baseUri: https://api.yaas.io/myService/v2/root

Expand feature

The expand feature rewrites the entire RAML content by resolving all of the include statements used within the RAML definition. As a result, there are no include statements in the RAML file when you access it through the API Console. The benefit of this approach is having one single file, reused in multiple places, so you only have to make one request to get the RAML content. This results in better performance in most use cases. There are three modes of RAML expanding that you can configure with the RAML Rewriter:

  • NONE performs no expansion, serving the file as-is.
  • COMPACT resolves and inlines all includes in the RAML, but does not inline the references inside JSON schemas.
  • FULL inlines all includes in the RAML and references in JSON schemas, even if it results in the same schema repeated multiple times.

Both the COMPACT and FULL modes cause the RAML file, with its external references, to merge into one single file. The difference between the COMPACT and FULL modes is only how the JSON schemas are handled. If you use a tool that is not able to resolve JSON references by itself, using the FULL mode makes more sense. If you, on the other hand, care about better readability or preserving the relationships between JSON schemas, then use the COMPACT mode. As long as the author of the RAML file took care of its readability, you might prefer to use the NONE mode, in other words, disable the expand feature, for receiving a human-readable file.

The RAML definition must be parsed to resolve all of the include statements, so your service must provide a valid RAML definition. Otherwise, a runtime exception occurs. The RAML Rewriter can ignore an invalid definition by disabling the rewriting feature, but this approach is not recommended.

Because the RAML Rewriter inlines all of the referenced files, a very large referenced file can exhaust your service's resources. Therefore, never include remote resources from untrusted locations in your RAML.

Refer to the description in the RAML Pattern Library documentation to determine how the responses and parameters from different traits and resource type definitions are merged.

The RamlRewriterFilter

The RamlRewriterFilter is a Java Servlet Filter that helps you to include the functionality of the RAML Rewriter into your service implementation. To use the RamlRewriterFilter, add the aggregating libraries or add a dependency to the RAML Rewriter library in your POM:

    <dependency>
        <groupId>com.sap.cloud.yaas.service-sdk</groupId>
        <artifactId>service-sdk-raml-rewriter</artifactId>
        <version>${service-sdk.version}</version>
    </dependency>

Then, configure the RamlRewriterFilter in the web.xml file of your service. To add support for RAML rewriting to all static files in the meta-data/ directory, use the following configuration:

    <filter>
        <filter-name>ramlRewriterFilter</filter-name>
        <filter-class>com.sap.cloud.yaas.servicesdk.ramlrewriter.filter.RamlRewriterFilter</filter-class>
    </filter>
    <filter-mapping>
        <filter-name>ramlRewriterFilter</filter-name>
        <url-pattern>/meta-data/*</url-pattern>
    </filter-mapping>

Be aware that the RamlRewriterFilter does not serve the static resources itself, but merely applies RAML rewriting to resources that other components serve. If not configured otherwise, the default Servlet takes care of serving all static resources of your web application, meaning all files in the src/main/webapp/ directory of your Maven project. However, other Servlets might interfere with the default Servlet, for instance when you map them to the /* URL pattern. In such situations, use the following additional configuration to make sure that the default Servlet can serve static files in the meta-data/ directory successfully:

<servlet-mapping>
    <servlet-name>default</servlet-name>
    <url-pattern>/meta-data/*</url-pattern>
</servlet-mapping>

The RamlRewriterFilter also supports the trait-replacement, caching, base-URI-replacement, and expansion features described here:

  • Trait replacement: Generally, traits need to be replaced when you call a service through the YaaS API proxy. Therefore, the RamlRewriterFilter only performs trait replacement when the hybris-external-url HTTP header is present.
    You can configure trait mappings on the server side using the optional traitMappings init-parameter in the Filter declaration. If you omit the init-parameter, the filter performs the default trait mappings, as described in the preceding section. For details, see the JavaDocs of RamlRewriterFilter.

  • Caching: You can cache rewritten RAML to save processing time in the service. To enable caching, configure the expireAfterSeconds init-parameter in the Filter declaration to a value greater than zero. To cache incorrect results resulting from a failure while expanding the RAML, add the expireInvalidAfterSeconds init-parameter with a value greater than zero. By default, caching is disabled.

  • Base URI replacement: If the hybris-external-url HTTP header is present, its value determines the baseUri replacement. The YaaS API Proxy sets this header to tell the service the external URL under which it is reachable.
    If the header is absent, the URL where the web application is deployed replaces the baseUri in RAML. Such fallback behavior is useful for local development.

  • Expansion: The client controls the expansion feature using a query parameter named expand. The query parameter values correspond to the expansion modes of the RAML Rewriter. Possible values are NONE, COMPACT, and FULL, as suggested by the API best practices.
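For example, a client can request the compact expansion of a service's API definition as follows:

GET {baseUri}/meta-data/api.raml?expand=COMPACT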


API Console

The Service SDK API Console is a web front end that presents the RAML API definition of your service to other developers. It is based on the open-source API Console for RAML and integrates into your YaaS service.

After you integrate the API Console, you can visit the URL of your service in a web browser, and be redirected to the API Console web front end. Alternatively, you can visit the web front end at the following location relative to the {baseUri} of your service:

{baseUri}/api-console/index.html

Integration

When creating a new service development project based on the Service SDK Archetype, the API Console is integrated out-of-the-box. Otherwise, you can adjust your service as follows to integrate the API Console:

  1. Your service must be based on a Maven WAR project. Preferably, your service uses the Service SDK Super POM and Spring.

  2. Make sure that your RAML API definition is available at meta-data/api.raml, as suggested in the API Guidelines for YaaS. For a Maven WAR project, put the definition into your webapp directory, as shown:

    src/main/webapp/meta-data/api.raml
    
  3. Add the following dependency to the dependencies section of your Maven pom.xml file:

    <dependency>
        <groupId>com.sap.cloud.yaas.service-sdk</groupId>
        <artifactId>service-sdk-api-console</artifactId>
        <version>${service-sdk.version}</version>
        <classifier>overlay</classifier>
        <type>war</type>
        <scope>runtime</scope>
    </dependency>
    
  4. Add the following servlet mapping to your web.xml file to prevent conflicts with other components of your service:

    <servlet-mapping>
        <servlet-name>default</servlet-name>
        <url-pattern>/meta-data/*</url-pattern>
        <url-pattern>/api-console/*</url-pattern>
    </servlet-mapping>
    
  5. To enable redirects and automatic RAML loading for the API Console, add the following filter configuration to your web.xml file:

    <filter>
        <filter-name>apiConsoleRedirectFilter</filter-name>
        <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
    </filter>
    <filter-mapping>
        <filter-name>apiConsoleRedirectFilter</filter-name>
        <url-pattern>/api-console/index.html</url-pattern>
        <url-pattern>/api-console/</url-pattern>
        <url-pattern>/api-console</url-pattern>
        <url-pattern>/</url-pattern>
    </filter-mapping>
    

    Also, add the following import to your Spring application context:

    <import resource="classpath*:/META-INF/api-console-spring.xml"/>
    

    Alternatively, if you do not use Spring, you can specify the filter-class com.sap.cloud.yaas.servicesdk.apiconsole.helpers.ApiConsoleRedirectFilter in the web.xml file itself.

  6. Configure the RamlRewriterFilter as described in the documentation for the Service SDK RAML Rewriter.

Legacy API Console integration

Earlier versions of the Service SDK API Console were implemented differently, and required the use of Jersey in your service. This approach is referred to as the legacy API Console in the following text.

You can still use the legacy API Console in existing services, but it is deprecated. Support for the legacy API Console is scheduled for removal in a major release of the Service SDK. The following integration guide is retained for completeness only.

To integrate the legacy API Console in your service, first add the following dependency to your project, or use the aggregating libraries:

    <dependency>
        <groupId>com.sap.cloud.yaas.service-sdk</groupId>
        <artifactId>service-sdk-api-console</artifactId>
        <version>${service-sdk.version}</version>
    </dependency>

Then, register its JAX-RS feature with your application, as in this example:

    register(ApiConsoleFeature.class);

You can also register the JAX-RS endpoints provided by the API Console separately:

    register(ApiConsoleEndpoint.class);
    register(RootApiConsoleEndpoint.class); //optional

Legacy service metadata

By default, the legacy API Console displays metadata about the service at the bottom of its UI. This information is extracted automatically from the standard Java MANIFEST.MF file, if present. The following information displays:

  • Implementation-Title
  • Implementation-Version
  • Build-Time

If you prefer to provide your own values, you can configure these environment variables for your service, respectively:

  • APP_NAME
  • APP_VERSION
  • BUILD_TIME

You must set both the APP_NAME and APP_VERSION variables in order for the system to respect the BUILD_TIME variable. This feature can be useful for testing your service on a staging environment.
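
For example, you could set the variables as follows; all values are illustrative:

APP_NAME=my-sample-service
APP_VERSION=1.4.2
BUILD_TIME=2016-03-01T10:15:00Z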

Legacy API listing support

By default, the legacy API Console loads RAML from the standard location at {baseUri}/meta-data/api.raml, as described in a preceding section. However, some legacy services do not follow this convention yet and do not publish their RAML at that location. Such services usually have the RAML and a related api/api-listing.json file on the Java classpath. In such cases, the legacy API Console library handles exposing and rewriting the RAML itself, as described below.

When the legacy API Console receives a call, it analyzes the api-listing.json file and determines the RAML based on the first configured fileName in the references array. For example:

    {
     "name": "Document Repository API",
     "references": [
      {
       "namespace": "com.sap.cloud.yaas",
       "id": "document-repository-service",
       "type": "raml",
       "fileName": "document-repository-service.raml"
      }
     ]
    }

This sample api-listing.json results in displaying the legacy API Console browser UI for the api/document-repository-service.raml file on the classpath. The legacy API Console loads the specified RAML file by calling the {baseUri}/api-console/raml/api/document-repository-service.raml resource. The legacy API Console library exposes the {baseUri}/api-console/raml/api resource path and returns any resource located at the api directory on the classpath.

Relying on the legacy API Console in general is deprecated functionality, as is relying on the api-listing.json file in particular. It is recommended that you migrate to the non-legacy API Console, which you can integrate and configure as described in a preceding section of this topic. It is also recommended that you serve your RAML API definition at {baseUri}/meta-data/api.raml, as suggested in the API Guidelines for YaaS.

Integrated clickjacking protection

The ability to embed the API Console into other sites means the API Console and the service that exposes it are susceptible to clickjacking attacks. The API Console has built-in protection against such attacks through the X-Frame-Options HTTP response header, which indicates whether a browser is allowed to render a page in a frame, iframe, or object.

By default, the clickjacking protection is disabled. If you use Spring and the Service SDK's Spring XML configuration, you can enable clickjacking protection using the API_CONSOLE_X_FRAME_OPTIONS_DIRECTIVE property or the API Console redirect filter's constructor. If you do not use the Spring framework and you mapped the API Console redirect filter in the web.xml file, you can activate the clickjacking protection using the xFrameOptionsHeader init param, as sketched after the following list.

There are three possible directives for X-Frame-Options:

  • DENY: The page cannot display in a frame, regardless of the site attempting to do so.
  • SAMEORIGIN: The page can only display in a frame on the same origin as the page itself.
  • ALLOW-FROM uri: The page can only display in a frame on the specified origin.
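
For example, if you mapped the API Console redirect filter directly in the web.xml file using the ApiConsoleRedirectFilter class mentioned earlier, a sketch of enabling SAMEORIGIN protection with the xFrameOptionsHeader init param could look like this:

<filter>
    <filter-name>apiConsoleRedirectFilter</filter-name>
    <filter-class>com.sap.cloud.yaas.servicesdk.apiconsole.helpers.ApiConsoleRedirectFilter</filter-class>
    <init-param>
        <param-name>xFrameOptionsHeader</param-name>
        <param-value>SAMEORIGIN</param-value>
    </init-param>
</filter>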


Jersey Support

Jersey is the recommended framework for processing RESTful requests in Java. The SDK's archetypes and Service Generator are based on it as well, so the Jersey Support library bridges the gap between the RAML Patterns best practices and what Jersey actually supports. To that end, the library provides a set of Jersey features that you can easily register with your Jersey application.

JerseyFeature

The JerseyFeature registers a comprehensive set of exception mappers responsible for mapping all standard exceptions to proper error responses in JSON format.

For example, the ThrowableMapper maps an uncaught RuntimeException to a proper 500 error response in the appropriate JSON format with the correct error type set.

The feature can be enabled by registering it to your Jersey application as follows:

register(JerseyFeature.class);

JsonFeature

After registering the JsonFeature to your Jersey application:

register(JsonFeature.class);

the Jackson library is registered for marshalling application/json documents. The feature provides a default configuration of the marshaller and registers appropriate exception mappers. For example, any JSON syntax error is mapped to a proper 400 error response in JSON format.

By default, the JsonFeature configures Jackson to omit a POJO's empty arrays and null properties from serialization. In some cases, for example when using the JsonFeature to configure a Jersey client, you might need to always include all properties, regardless of their values. To achieve this, use the JsonFeature constructor that takes the serializeJsonEmptyValues parameter:

register(new JsonFeature(true));

If you need further customization, you can also initialize the JsonFeature with a predefined ObjectMapper:

final ObjectMapper myMapper = new ObjectMapper();
register(new JsonFeature(myMapper));

When you provide a predefined ObjectMapper, it is used as given to register the Jackson components, and none of its values are overridden.

SecurityFeature

By registering the SecurityFeature to your Jersey application:

register(SecurityFeature.class);

you will be able to restrict access to your Jersey resources using the RolesAllowed annotation. The feature registers a filter that propagates the scopes of the hybris-scopes header (if present) to the security context, so that they can be evaluated as part of the RolesAllowed annotation evaluation.
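
For example, a resource method might be restricted as in the following sketch; the resource and scope names are purely illustrative:

import javax.annotation.security.RolesAllowed;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("/wishlists")
public class WishlistResource
{
    // Only callers whose hybris-scopes header contains this scope may invoke the method
    @GET
    @RolesAllowed("myteam.wishlist_view")
    public Response getWishlists()
    {
        return Response.ok().build();
    }
}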

BeanValidationFeature

After registering this feature to your Jersey application:

register(BeanValidationFeature.class);

you will have bean validation support enabled for the Jersey resources, with appropriate exception mappers registered. This is a prerequisite for the bean validation annotations that the Service Generator adds to your resources to take effect.
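
As a minimal illustrative sketch, with the feature registered, standard bean validation annotations on resource parameters are enforced, and violations are mapped to JSON error responses:

import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Response;

@Path("/items")
public class ItemResource
{
    // A missing or overlong "q" parameter results in a 400 error response
    @GET
    public Response search(@NotNull @Size(max = 50) @QueryParam("q") final String query)
    {
        return Response.ok().build();
    }
}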

Request and response logging filter

By registering the RequestResponseLoggerFilter to your Jersey application:

register(RequestResponseLoggerFilter.class);

you will be able to enable the logging of Jersey requests and responses.

To also log the request and response entities, up to a specified maximum number of entity bytes, register the RequestResponseLoggerFilter using its appropriate constructor:

register(new RequestResponseLoggerFilter(mySlf4JFilter, maxEntitySizeInBytes));

Security information masking

Another feature of this library is masking any authorization "bearer" or "basic" tokens that might appear when logging the Jersey request and response headers. For this purpose, all headers are parsed, and any detected token is automatically masked. Authorization tokens are detected with the following regex pattern:

(Authorization:\s+(Bearer|Basic)\s+)([a-zA-Z0-9-\._~\+\/]+=*)

The last group, which matches the token, is replaced with the ***** pattern.
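
For example, an illustrative logged header such as:

Authorization: Bearer dGhpcy1pcy1hbi1leGFtcGxl

appears in the log as:

Authorization: Bearer *****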

PatchFeature

A PUT request always replaces a whole resource, so it cannot be used for partial updates, and a POST request always creates a new resource. Currently, JAX-RS offers no standard notation for sending a partial update. Such a feature is planned to become part of the JAX-RS standard through the PATCH method, and this library integrates support for it based on the PATCH RFC drafts.

Effect

After registering the PatchFeature by adding the feature to your Jersey application:

register(PatchFeature.class);

you can put a PATCH annotation to your resource methods, as shown in this example:

@PATCH
@Consumes(PatchMediaType.APPLICATION_JSON_PATCH_JSON) // application/json-patch+json
public MyType patch(MyType patchedMyType);

This method accepts application/json-patch+json content containing the partial update entity. The Patch-Jersey library accepts the request and applies it to the current state of your entity. The current state is retrieved by searching the same resource class for a GET method with the same Path annotation as the PATCH method. This GET method is called, and the patch entity is applied to its result. Afterward, the patched entity is passed to the PATCH method.
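
For illustration, a request body in the application/json-patch+json format (JSON Patch, RFC 6902) might look like the following; the paths and values are hypothetical:

    [
        { "op": "replace", "path": "/title", "value": "My updated title" },
        { "op": "remove", "path": "/description" }
    ]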

Your resource class must always provide a related GET method for any defined PATCH methods:

@Path("/bla")
@GET
@Produces(MediaType.APPLICATION_JSON)
public MyType get();

@Path("/bla")
@PATCH
@Consumes(PatchMediaType.APPLICATION_JSON_PATCH_JSON) // application/json-patch+json
public MyType patch(MyType patchedMyType);

Additionally, you can use the JSON merge patch flavor for your API as described in JSON Merge Patch RFC 7386, as shown in this example:

@PATCH
@Consumes(PatchMediaType.APPLICATION_MERGE_PATCH_JSON) // application/merge-patch+json
public MyType patch(MergeDto mergeDtoType);

This method accepts application/merge-patch+json and application/json content containing the partial update entity.
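
An illustrative application/merge-patch+json request body follows; per RFC 7386, properties set to null are removed, and the field names here are hypothetical:

    {
        "title": "My updated title",
        "description": null
    }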

Known Limitations

  • For JSON Patch, you must provide a GET method in the same resource class, with the same Path annotation, that returns the same object type or schema.
  • Currently, only JSON is supported, so a PATCH method and any corresponding GET method must return the application/json type.
  • Your client must support the HTTP PATCH method. Because the Java HTTP client does not, you must use a client connector in your Jersey test. For more information, see the Jersey Client Connectors documentation, and locate this example:
<dependency>
    <groupId>org.glassfish.jersey.connectors</groupId>
    <artifactId>jersey-grizzly-connector</artifactId>
    <version>${jersey.version}</version>
</dependency>

After adding the dependency, register the connector with your client configuration using this code:

config.connectorProvider(new GrizzlyConnectorProvider());

Pagination

When designing a service that returns lists of data, you may want to provide the ability to paginate your data – that is, split the data across several pages and also expose “Previous/Next” links as headers.

You can obtain a pagination-enabled API by applying the 'paged' trait and, optionally, the 'countable' trait. To respect the contract defined by these traits, you can use the pagination library included in the SDK for your Java/Jersey implementation.

When implementing a typical service that delivers paginated data, you may follow these steps:

Step 1

Use the PaginationRequest class to collect all specified paging parameters, typically in the request handler of a collection getter.

For example, for a resource that defines a collection getter which also implements the functionality of the paged and countable traits, this can be done as follows:

public Response get(CountableParameters countable, PagedParameters paged, ...) {
  return myServiceImplementation.get(new PaginationRequest(paged.getPageNumber(), paged.getPageSize(), countable.isTotalCount()), ...);
}

The paged trait's contract also requires that default values are applied if any values are missing from the request, which the PaginationContext does automatically.

Step 2

If the pagination is delegated to a dependency service, or if you simply want to access paginated data from another service, you can use the PaginationSupport utility class to easily decorate the request with the required pagination query parameters.

For example, when creating a request to a generated client, which also implements the paged and countable traits, pagination query parameters can be applied as follows:

clientBuilder().prepareGet()
  .withPageNumber(paginationRequest.getPageNumber())
  .withPageSize(paginationRequest.getPageSize())
  .withTotalCount(paginationRequest.isCountingTotal())
  .execute();

Step 3

When building a response, you can use the PaginatedCollection, which represents a page of items and its corresponding pagination information, as well as an optional total number of items (including items beyond the current page).

For example, to prepare a page of items based on a response received from a dependency service, you can use the PaginatedCollection and the PaginationSupport as follows:

result = PaginatedCollection.<MyInterestingListItem>of(resultList)
  .withNextPage(PaginationSupport.extractNextPageFromResponse(response))
  .withTotalCount(PaginationSupport.extractCountFromResponse(response, paginationRequest))
  .withPageNumber(paginationRequest.getPageNumber())
  .withPageSize(paginationRequest.getPageSize())
  .build();

or the short version:

result = PaginatedCollection.<MyInterestingListItem>of(resultList)
  .with(response, paginationRequest).build();

Step 4

As defined in the paged trait's contract, you should provide pagination information ('next', 'prev', 'self') as Link headers in the response:

ResponseBuilder responseBuilder = Response.ok(result);
PaginationSupport.decorateResponseWithCount(responseBuilder, result);
PaginationSupport.decorateResponseWithPage(uriInfo, responseBuilder, result);
return responseBuilder.build();

You can also see how the library is used in this sample project.


Servlet Support

The Service SDK's Servlet Support library contains reusable tools and utilities for pure Java Servlet development.

Configuration

To use the library, import the aggregating YaaS libraries, or add a dependency to it in your pom.xml file:

<dependency>
    <groupId>com.sap.cloud.yaas.service-sdk</groupId>
    <artifactId>service-sdk-servlet-support</artifactId>
    <version>${service-sdk.version}</version>
</dependency>

Excludable Servlet Filter Wrapper

Using the standard Servlet API, you can map a filter to certain URL patterns, but you cannot specify exceptions to those patterns. The Service SDK therefore provides the Excludable Servlet Filter Wrapper, a wrapper filter that delegates the request to the configured filter only if the request context path does not match any of the configured exclusion paths. With it, you can exclude a specific filter from the filter chain for specific request paths. Define the paths to exclude as a comma-separated list; if the request path matches one of the given paths, the wrapper filter does not delegate to your filter.

To use the wrapper filter of the library, add it to your web.xml file.

When your application is based on Spring, the recommended method is to use the Spring delegating filter proxy:

<filter>
    <filter-name>excludableFilter</filter-name>
    <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>

Then, define mappings:

<filter-mapping>
    <filter-name>excludableFilter</filter-name>
    <url-pattern>/myPath</url-pattern>
</filter-mapping>

As the proxy looks up a filter object defined in Spring, define a bean in your Spring configuration file:

<bean id="excludableFilter" class="com.sap.cloud.yaas.servicesdk.servletsupport.filters.ExcludableServletFilterWrapper">
      <constructor-arg name="delegate" ref="basicAuthenticationFilter"/>
      <constructor-arg name="excludePaths" value="/myPathToExclude,/mySecondPathToExclude"/>
</bean>
If you already have a filter in your web.xml that you want to wrap into an ExcludableServletFilterWrapper, remove that filter definition from web.xml. For example, if you have a basicAuthenticationFilter already defined in web.xml, remove it prior to applying the preceding example.

If you are not using Spring in your application, you can directly define the filter class to use in your web.xml file:

<filter>
    <filter-name>excludableFilter</filter-name>
    <filter-class>com.sap.cloud.yaas.servicesdk.servletsupport.filters.ExcludableServletFilterWrapper</filter-class>
    <init-param>
        <param-name>excludable-filter-classname</param-name>
        <param-value>com.sap.cloud.yaas.servicesdk.security.basicauthorization.EnforceBasicAuthenticationFilter</param-value>
    </init-param>
    <init-param>
        <param-name>excludable-filter-exclude-paths</param-name>
        <param-value>/myPathToExclude,/mySecondPathToExclude</param-value>
    </init-param>
    <init-param>
        <param-name>httpAuthenticationRealm</param-name>
        <param-value>Basic Authentication for trusted clients</param-value>
    </init-param>
    <init-param>
        <param-name>basicAuthenticationCredentials</param-name>
        <param-value>user1:1234 user2:4321</param-value>
    </init-param>
</filter>

<filter-mapping>
    <filter-name>excludableFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

The class specified in the excludable-filter-classname init-param must implement the javax.servlet.Filter interface and provide a no-argument constructor, which is used to instantiate it. The wrapped filter is initialized with the entire list of init-params defined for the Excludable Servlet Filter Wrapper.

Error Message Servlet

If an uncaught exception propagates to the container, it can pose a security risk, because it often contains debugging information that an attacker can potentially use.

The Error Message Servlet is a global handler that catches all uncaught exceptions that could otherwise leak to the container, and instead returns an appropriate error message in JSON format that conforms to the YaaS Error Message Schema. In addition to mitigating the security issue, it therefore helps you keep your API's error messages consistent.
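
The exact field set is defined by the YaaS Error Message Schema. Purely as an illustration, a generated response could look similar to the following; the field names and values here are indicative only:

    {
        "status": 500,
        "type": "internal_service_error",
        "message": "A server-side error occurred."
    }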

Configure the servlet in your web.xml in the following way, and make sure to add it after all your other servlets:

<servlet>
    <servlet-name>errorServlet</servlet-name>
    <servlet-class>com.sap.cloud.yaas.servicesdk.servletsupport.servlets.ErrorMessageServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>errorServlet</servlet-name>
    <url-pattern>/WEB-INF/error-message-servlet</url-pattern>
</servlet-mapping>

Map all your uncaught errors to it:

<error-page>
    <location>/WEB-INF/error-message-servlet</location>
</error-page>

If you are using Spring Boot and the YaaS Service SDK Spring Boot Starter library, the servlet is configured out-of-the-box, by default.


Security

The security library contains utilities that can be used to secure your service. For more information, see the JavaDocs of the contained classes.

Import the security library

To use the security library, import the aggregating YaaS libraries, or add the following dependency to your Maven pom.xml file:

<dependency>
  <groupId>com.sap.cloud.yaas.service-sdk</groupId>
  <artifactId>service-sdk-security</artifactId>
  <version>${service-sdk.version}</version>
</dependency>

Basic Authentication Filter

The EnforceBasicAuthenticationFilter is a Servlet filter that can be used to enforce HTTP Basic Authentication for your service, according to RFC 2617.

In YaaS, HTTP Basic Authentication is used to secure communications between the API proxy and a particular YaaS service. To enable this security feature for your service, go to the service details page in the Builder, turn on Basic Authentication, and specify a username and password of your choice. The API proxy will then send this username and password along with every request to your service.

Using the EnforceBasicAuthenticationFilter, your service can verify that it is being called with the correct username and password. Thus it can ensure that all requests are routed through the API proxy (rather than being crafted by a malicious attacker), and all Hybris-specific request headers are trustworthy.

The EnforceBasicAuthenticationFilter follows an all-or-nothing approach, blocking all requests that are not authenticated by an authorized user. This blocking behavior is based on a list of authorized usernames and passwords, which you can configure as shown below.

In this section, the terms user, username, and password are used for consistency with RFC 2617. These terms are not related in any way to the YaaS user or to OAuth 2.0. The username and password used by the EnforceBasicAuthenticationFilter are intended solely to establish trust between the API proxy and a YaaS service. They are not associated with end users or with any kind of personal data.

Using the Basic Authorization filter

You can either set up an instance of the EnforceBasicAuthenticationFilter directly or use Spring.

If you are using the filter directly, you must add the filter declaration to the web.xml file with the basicAuthenticationCredentials and httpAuthenticationRealm init parameters fully configured:

<filter>
  <filter-name>basicAuthenticationFilter</filter-name>
  <filter-class>com.sap.cloud.yaas.servicesdk.security.basicauthorization.EnforceBasicAuthenticationFilter</filter-class>
  <init-param>
    <param-name>httpAuthenticationRealm</param-name>
    <param-value>Basic Authentication for trusted clients</param-value>
  </init-param>
  <init-param>
    <param-name>basicAuthenticationCredentials</param-name>
    <param-value></param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>basicAuthenticationFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

If your application is based on Spring, you can use the following, shorter alternative, using the Spring DelegatingFilterProxy:

<filter>
  <filter-name>basicAuthenticationFilter</filter-name>
  <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>
<filter-mapping>
  <filter-name>basicAuthenticationFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

In this case, the EnforceBasicAuthenticationFilter is configured in a Spring application context. Be sure to use the exact name basicAuthenticationFilter as the filter name, since the DelegatingFilterProxy uses it to look up a corresponding filter bean defined in the Spring application context. The security library comes with a corresponding Spring configuration file, which can be imported into your Spring application context:

<import resource="classpath:/META-INF/security-spring.xml" />

This import is only necessary if you are not using the Spring configuration snippet for the aggregated libraries, which already contains it.

Enabling Basic Authorization

To secure incoming requests with HTTP Basic Authentication, you have to configure the authorized users. If you use the filter directly, hard-code a value for the basicAuthenticationCredentials init parameter in the web.xml file. If you use Spring to instantiate the filter, you can configure the authorized users through the property placeholder called BASIC_AUTHENTICATION_CREDENTIALS. In the latter case, it is advisable to set an environment variable of that name, which avoids hard-coding confidential data in the code of your service.

In both cases, the expected configuration format is the following:

  • Each white-space delimited token holds one pair of username and password.
  • Everything before the first colon (:) in the pair is the username.
  • Everything after the first colon (:) in the pair is the password.

You may notice that the above format supports multiple pairs of usernames and passwords, any of which will be accepted by the filter. This may be useful for testing your service or for migrating to a new username and password.

Here is an example of a Basic Authorization configuration for multiple users:

BASIC_AUTHENTICATION_CREDENTIALS="user1:1234 user2:4321"

When the EnforceBasicAuthenticationFilter is configured with an empty list of authorized users, it is considered inactive and behaves differently. When inactive, requests are never blocked, and always proceed down the Servlet filter chain unaltered. This is useful during development, but it should be avoided whenever your service is deployed on a publicly reachable host.


Authorization

The authorization library comes in handy when your service acts as a client for other YaaS services. It helps your service to authenticate against an OAuth 2.0 Authorization Server and to obtain access tokens from it. These access tokens can then be used to authorize requests to other YaaS services.

Integration into your service

The authorization library is not part of the aggregated libraries. You must add a separate Maven dependency to use it in your service project:

<dependency>
  <groupId>com.sap.cloud.yaas.service-sdk</groupId>
  <artifactId>service-sdk-authorization</artifactId>
  <version>${service-sdk.version}</version>
</dependency>

Configuration

Although you can use one of the provided AccessTokenProvider implementations directly, the most convenient way to use the authorization library is through an AuthorizedExecutionTemplate. You can either set up an instance using Spring or instantiate it in any way you prefer. This is an example Spring configuration:

<bean id="authorizedExecutionTemplate" autowire-candidate="true" class="com.sap.cloud.yaas.servicesdk.authorization.integration.AuthorizedExecutionTemplate">
    <constructor-arg ref="accessTokenProvider"/>
</bean>

<bean id="accessTokenProvider" class="com.sap.cloud.yaas.servicesdk.authorization.cache.SimpleCachingProviderWrapper">
    <constructor-arg>
        <bean class="com.sap.cloud.yaas.servicesdk.authorization.protocol.ClientCredentialsGrantProvider">
            <property name="tokenEndpointUri" value="${OAUTH2_TOKEN_ENDPOINT_URL}"/>
            <property name="clientId" value="${CLIENT_ID}"/>
            <property name="clientSecret" value="${CLIENT_SECRET}"/>
        </bean>
    </constructor-arg>
</bean>

Or use the non-Spring variant:

public AccessTokenProvider accessTokenProvider()
{
    final ClientCredentialsGrantProvider clientCredentialsGrantProvider = new ClientCredentialsGrantProvider();
    clientCredentialsGrantProvider.setClientId(clientId);
    clientCredentialsGrantProvider.setClientSecret(clientSecret);
    clientCredentialsGrantProvider.setTokenEndpointUri(oauth2TokenEndpointUrl);
    final SimpleCachingProviderWrapper simpleCachingProviderWrapper = new SimpleCachingProviderWrapper(
            clientCredentialsGrantProvider);
    return simpleCachingProviderWrapper;
}

The SimpleCachingProviderWrapper provides caching of access tokens so that the same token can be reused for multiple requests. If you do not want such caching, you can use the ClientCredentialsGrantProvider directly instead.

You can obtain suitable CLIENT_ID and CLIENT_SECRET configuration values in the YaaS Builder after you register your service.

Finally, the OAUTH2_TOKEN_ENDPOINT_URL is the URL of the token endpoint of the OAuth 2.0 Authorization Server. In the case of the YaaS OAuth 2.0 service, the token endpoint is available at the relative path /token.
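
For example, the environment variable could be set as follows; the host is a placeholder for the actual OAuth 2.0 service URL:

OAUTH2_TOKEN_ENDPOINT_URL=https://oauth2-service.example.com/token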

It is recommended to inject these configuration values from environment variables and not to commit them to your source code. If you created your project from the Service SDK Archetype, you can use the above example Spring configuration as is – the Spring setup will automatically replace the placeholders with the values of corresponding environment variables.

Authorizing your service requests

You can reuse a singleton instance of the AuthorizedExecutionTemplate class, as configured above, throughout your service. Whenever a piece of code makes requests to other YaaS services and needs an access token, you can wrap it in an AuthorizedExecutionCallback, as shown here:

final Response response = authorizedExecutionTemplate.executeAuthorized(
    new AuthorizationScope(tenant, Arrays.asList("myteam.my_example_scope")),
    new DiagnosticContext(requestId, hop),
    new AuthorizedExecutionCallback<Response>()
    {
        @Override
        public Response execute(final AccessToken token)
        {
            // execute requests to other YaaS services with the given token in the "Authorization" header
            // return Response object
        }
    });

Handling invalid token errors

If an access token is invalidated before the refresh period ends, sending a request with it will always result in a 401 or 403 HTTP error response. It is important to handle such cases so that the invalid token is refreshed as soon as it becomes invalid. One way to do this is to ensure that an AccessTokenInvalidException is thrown on 401 and 403 error responses in your execution callback, as shown in the following example:

new AuthorizedExecutionCallback<Response>()
{
    @Override
    public Response execute(final AccessToken token)
    {
        // execute requests to other YaaS services with the given token in the "Authorization" header
        if (Response.Status.FORBIDDEN.getStatusCode() == response.getStatus() || Response.Status.UNAUTHORIZED.getStatusCode() == response.getStatus())
        {
            throw new AccessTokenInvalidException(message, token);
        }
        return response;
    }
};

You can also configure custom exceptions on which the token should be refreshed by specifying the refreshExceptions property of the AuthorizedExecutionTemplate:

    <bean id="authorizedExecutionTemplate" class="com.sap.cloud.yaas.servicesdk.authorization.integration.AuthorizedExecutionTemplate">
        <constructor-arg ref="accessTokenProvider"/>
        <constructor-arg name="refreshExceptions">
            <set value-type="java.lang.Class">
                <value>com.my.custom.MyTokenInvalidException</value>
            </set>
        </constructor-arg>
    </bean>

This setting overrides the default behavior of refreshing the token when an AccessTokenInvalidException is thrown.

Authorizing your JAX-RS client requests with the OAuth2Filter

When you are using a JAX-RS implementation (like Jersey) to make requests to other YaaS services, there is an even simpler way to authorize your requests. For this purpose, the authorization library provides the OAuth2Filter class, a JAX-RS component that can be registered with any JAX-RS client.

The OAuth2Filter makes use of an AccessTokenProvider (as described above) to acquire the actual access tokens. It can also be configured with an AuthorizationScope, which specifies the scope to request when acquiring access tokens. Altogether, configuring and registering the OAuth2Filter may look somewhat like this:

final OAuth2Filter oAuth2Filter =
        new OAuth2Filter(
                accessTokenProvider,
                new AuthorizationScope(tenant, Arrays.asList("myteam.my_example_scope")),
                1);

final Client client =
        ClientBuilder
                .newClient()
                .register(oAuth2Filter);

You can now use the resulting JAX-RS Client for any kind of request. The registered OAuth2Filter will transparently obtain access tokens, and add them to the Authorization header of each request. Whenever a request fails because of an invalidated access token, the OAuth2Filter will automatically acquire a new one and retry the request.

By default, the OAuth2Filter requests the configured AuthorizationScope when acquiring an access token. However, it is not always desirable to use the same AuthorizationScope for each request, in particular when multiple tenants are involved. In such situations, you can override the AuthorizationScope on a per-request basis. Simply use a JAX-RS request property with the name defined by the OAuth2Filter.PROPERTY_AUTHORIZATION_SCOPE constant, as illustrated in the following example:

final Response response =
        client.target(myUrl)
                .request()
                .property(
                        OAuth2Filter.PROPERTY_AUTHORIZATION_SCOPE,
                        new AuthorizationScope(tenant, Arrays.asList("myteam.my_example_scope")))
                .get();

Using the authorization library with Hystrix

The Hystrix library enables you to improve the resiliency of your system by wrapping API calls and adding latency tolerance and fault tolerance logic. There are two aspects that should be taken care of when using the authorization library together with Hystrix. One is wrapping the call that requests the token, and the other is making sure that the refresh token exceptions are not suppressed while wrapping all other calls.

Applying Hystrix to the authorization call

When you use Hystrix, you typically wrap each call to another service in a HystrixCommand so that it is isolated and does not propagate recurring errors down the call hierarchy. The call that obtains the access token is no exception. However, when you use the authorization library from the SDK, wrapping this call in a Hystrix command works slightly differently, as explained below.

In order to wrap the access token request in a Hystrix command, you must add a custom wrapper around the ClientCredentialsGrantProvider. First, you must provide an implementation of AccessTokenProvider which wraps each call in a HystrixCommand. Your minimal definition of the wrapper might look similar to the following:

public class HystrixAccessTokenProviderWrapper implements AccessTokenProvider
{
    private AccessTokenProvider wrappedProvider;

    @Override
    public boolean isEnabled()
    {
        return wrappedProvider.isEnabled();
    }

    @Override
    public AccessToken acquireToken(final AuthorizationScope scope, final DiagnosticContext context)
    {
        return new HystrixCommand<AccessToken>(HystrixCommand.Setter //
            .withGroupKey(HystrixCommandGroupKey.Factory.asKey("myservice")) //
            .andThreadPoolKey(HystrixThreadPoolKey.Factory.asKey("myservice-oauth")) //
            .andCommandKey(HystrixCommandKey.Factory.asKey("myservice-oauth-token-get")))
        {
            @Override
            public AccessToken run()
            {
                return wrappedProvider.acquireToken(scope, context);
            }
        }.execute();
    }

    @Override
    public void invalidateToken(final AccessToken token)
    {
        wrappedProvider.invalidateToken(token);
    }

    public void setWrappedProvider(final AccessTokenProvider wrappedProvider)
    {
        this.wrappedProvider = wrappedProvider;
    }
}

Next, wrap the ClientCredentialsGrantProvider into the newly defined token provider, as in the following Spring configuration:

<bean id="accessTokenProvider" class="com.sap.cloud.yaas.servicesdk.authorization.cache.SimpleCachingProviderWrapper">
    <constructor-arg>
        <bean class="com.your.service.hystrix.HystrixAccessTokenProviderWrapper">
            <property name="wrappedProvider">
                <bean class="com.sap.cloud.yaas.servicesdk.authorization.protocol.ClientCredentialsGrantProvider">
                    <property name="tokenEndpointUri" value="${OAUTH2_TOKEN_ENDPOINT_URL}"/>
                    <property name="clientId" value="${CLIENT_ID}"/>
                    <property name="clientSecret" value="${CLIENT_SECRET}"/>
                </bean>
            </property>
        </bean>
    </constructor-arg>
</bean>

Handling invalid token errors while using Hystrix

One of the previous sections explained how to handle invalid token errors when using the authorization library. If you use Hystrix together with the authorization library, you should take special care with any exceptions defined in the refreshExceptions property of the AuthorizedExecutionTemplate (by default, the AccessTokenInvalidException).

By the design of the authorization library, the business logic is executed inside the executeAuthorized method. If your business logic call is wrapped into another Hystrix command, you must remember to wrap any "refreshException" thrown there into a HystrixBadRequestException before re-throwing it. This prevents Hystrix from applying the fault tolerance and fallback behavior and ensures that the exception arrives at the authorization library and actually triggers the token refresh.

For example, in another part of your project, you may implement the following logic:

        // "Item" stands for the entity type returned by the wrapped persistence layer
        return new HystrixCommand<Item>(HystrixCommand.Setter //
            .withGroupKey(HystrixCommandGroupKey.Factory.asKey("myservice")) //
            .andThreadPoolKey(HystrixThreadPoolKey.Factory.asKey("myservice-items")) //
            .andCommandKey(HystrixCommandKey.Factory.asKey("myservice-item-get")))
        {
            @Override
            public Item run()
            {
                try
                {
                    return wrappedPersistence.getItem(itemId);
                }
                // wrap the AccessTokenInvalidException to ensure it triggers the token refresh
                catch (final AccessTokenInvalidException accessTokenInvalidException)
                {
                    throw new HystrixBadRequestException(accessTokenInvalidException.getMessage(), accessTokenInvalidException);
                }
            }
        }.execute();

For more information, refer to the JavaDocs of the authorization library.


Monitoring

The Service SDK provides a monitoring library that allows you to easily expose information about your running service to an external monitoring tool.

This library provides instance-specific metrics, such as the instance's available memory, current values of the JVM variables, or the state of custom business domain variables.

Be aware that to push metrics to Riemann with this library, you must set up your own Riemann instance.

Import the monitoring library

In order to use the monitoring library, add a dependency to the monitoring module of Service SDK:

<dependency>
  <groupId>com.sap.cloud.yaas.service-sdk</groupId>
  <artifactId>service-sdk-monitoring</artifactId>
  <version>${service-sdk.version}</version>
</dependency>

When using Spring, you can wire up the monitoring classes by including the monitoring-specific configuration in your Spring context:

<import resource="classpath:/META-INF/monitoring-spring.xml" />

Instance-specific metrics

Monitoring has two main components: JVM property values and any other custom values that you add to the monitoring. For operational reasons, a custom monitor called Service_IP_PORT is provided by default. It monitors the local IP and port of the service on Cloud Foundry.

The monitoring library can detect whether the service is deployed locally or to Cloud Foundry. Monitoring is disabled when you run your service locally. When deployed to Cloud Foundry, with the VCAP_APPLICATION system property set, monitoring is enabled by default, unless specified otherwise with environment variables or through the code.

By default, the library sends all monitoring metrics to the logging framework and, with that, streams them as JSON documents to the standard console output of the application. Infrastructure services can pick up this output to process the metrics and make them available through tools like Graphite or Kibana. For details on how the logs display and how to modify them, see the logging library documentation of the Service SDK.

You can configure the following environment variables or system properties:

Name | Default Value | Type | Description
MONITORING_ENABLED | false | boolean | Enables monitoring. If the VCAP_APPLICATION environment variable is present, the default is true.
MONITORING_POLLING_TIME | 120 | long | Metrics polling interval in seconds. Controls how often all metrics are polled and pushed to the logging framework.

Features

The monitoring library provides out-of-the-box support for polling standard JVM values, and you can specify application-specific values to monitor. By default, the monitoring library picks up values using JMX queries. The default queries are:

"java.lang:type=Memory,*"
"java.lang:type=OperatingSystem,*"
"java.lang:type=Threading,*"

However, you can override these defaults. To monitor custom values, provide your own metric provider implementations, as described in the next section.

Customization and usage

If you do not need to change the default behavior or monitor any custom values, you can use the out-of-the-box monitoring support in your service simply by upgrading your dependencies to the latest service-sdk version; the monitoring library is also part of the aggregation library.

To monitor custom values, implement the CustomMonitor interface from the com.sap.cloud.yaas.servicesdk.monitoring.metrics package, and annotate each method that retrieves a value with the @Monitor annotation:

    import java.util.concurrent.atomic.AtomicInteger;

    import org.apache.camel.Exchange;

    import com.netflix.servo.annotations.DataSourceType;
    import com.netflix.servo.annotations.Monitor;
    import com.sap.cloud.yaas.servicesdk.monitoring.metrics.CustomMonitor;

    /**
     * Bean providing the service's metrics to expose for monitoring.
     */
    public class EmailMetricProvider implements CustomMonitor
    {
        private static final int INITIAL_VALUE = 0;
        private final AtomicInteger failedMailCount = new AtomicInteger(INITIAL_VALUE);

        /**
         * Counts a failed email. The Exchange parameter assumes this bean is
         * invoked as a Camel processor; adapt the hook to your own setup.
         */
        public void process(final Exchange exchange)
        {
            failedMailCount.incrementAndGet();
        }

        /**
         * Resets the failed mails counter.
         */
        public void reset()
        {
            failedMailCount.set(INITIAL_VALUE);
        }

        @Monitor(name = "failed emails count", type = DataSourceType.GAUGE)
        public int getFailedMailCount()
        {
            return failedMailCount.get();
        }
    }

Add this class in the constructor arguments of the monitoring service factory:

    <!-- customize monitoring metrics by overriding service bean -->
    <bean id="monitoringService" init-method="start" destroy-method="stop"
        factory-bean="monitoringServiceFactory" factory-method="createMonitoringService" >
        <constructor-arg>
            <bean class="com.sap.cloud.yaas.servicesdk.monitoring.metrics.MonitoringConfiguration">
                <property name="monitors">
                    <util:map>
                        <entry key="emailMetricProvider" value-ref="emailMetricProvider"/>
                    </util:map>
                </property>
            </bean>
        </constructor-arg>
    </bean>

Use the MonitoringConfiguration class to customize the monitoring behavior by setting the following attribute values:

  • monitoringObserverPrefix - The text prefixed to every metric name.
  • monitoringObserverAddress - The server address of the monitoring application, or the address of the Riemann server when the RiemannMetricObserver is used. (For more information, see the next section of this topic.)
  • jvmMonitorFilters - The JMX query filters.
  • monitoringEnabled - Specifies whether to enable monitoring.
  • monitors - The custom monitor classes.
  • observerFactories - Sets a different kind of observer, such as the RiemannMetricObserver. For more information, see the next section of this topic.
  • pollInterval - How often to gather metrics.
  • timeUnit - The unit for the poll interval.

Supported MetricObservers

By default, all metrics are pushed to the logging framework. The bridge to the logging framework is implemented in the LoggingMetricObserver class, which is one implementation of the MetricObserver interface. The monitoring library supports additional observers:

  • RiemannMetricObserver - Pushes metrics to a Riemann server using the Riemann protocol via TCP.
  • GraphiteMetricObserver - Pushes metrics to a Graphite server using the Graphite protocol via TCP. You can also configure a Riemann server here, as it supports the same protocol, but be aware that non-numeric metrics are dropped.

You can configure the observers to use for monitoring at the monitoringServiceFactory, as mentioned in the previous section, using the observerFactories attribute.

Both additional observers need to have the address of the target server configured. You can configure this using either the monitoringServiceFactory.monitoringObserverAddress attribute or the MONITORING_ENDPOINT environment variable:

MONITORING_ENDPOINT=your.riemann.instance.com:5555

Hystrix integration

If you are using Hystrix for isolating the calls to the remote dependencies, you can integrate the metrics that the Hystrix commands gather by adding a dependency:

<dependency>
    <groupId>com.netflix.hystrix</groupId>
    <artifactId>hystrix-servo-metrics-publisher</artifactId>
    <version>${hystrix.version}</version>
</dependency>

Then, provide this line of code so that it executes when your application starts:

HystrixPlugins.getInstance().registerMetricsPublisher(HystrixServoMetricsPublisher.getInstance());

Non-Spring implementation

The library exposes the com.sap.cloud.yaas.servicesdk.monitoring.metrics.MonitoringServiceFactory factory, which you can use to obtain fully constructed objects as needed. You can also override its different parts.


Ping page

The ping library provides functionality to easily expose an endpoint to test the reachability of the service instances.

Import the ping library

To use the ping library, add a dependency to the ping module of Service SDK:

<dependency>
  <groupId>com.sap.cloud.yaas.service-sdk</groupId>
  <artifactId>service-sdk-ping</artifactId>
  <version>${service-sdk.version}</version>
</dependency>

Ping endpoint

Ensure the ping endpoint is accessible under /ping, and that it delivers a plain text response for a GET request. The plain text response body is customizable and its default value is OK.

Use the Jersey endpoint

To have the /ping endpoint available out of the box, register the DefaultPingResource in your service's implementation of Jersey's ResourceConfig:

public MyApplicationResourceConfig() {
  register(DefaultPingResource.class);
}

Customize the ping text message

You can customize the ping endpoint's response text message, which is OK by default, by using the resource class's setPingOkMessage setter.

Example:

final DefaultPingResource customPingMessage = new DefaultPingResource();
customPingMessage.setPingOkMessage("Better OK message");
register(customPingMessage, PingResource.class);


Plug-ins

The plugins directory contains Maven plug-ins to use in your Maven-based project.

The generator Maven plug-in

The service-sdk-generator-maven-plugin provides support for exposing JAX-RS endpoints and/or generating client code based on RAML API definitions.

To do this, it exposes the generate-service and generate-client Maven goals.

The generate-service plug-in goal

This goal provides support to expose your RAML API as JAX-RS endpoints in your Java project. You can also use it to keep the implementation in sync with your API definition. For more information on binding the plug-in execution to your build lifecycle, see the Generating Services section.

Following is the plug-in with a minimal configuration:

<pluginManagement>
    <plugin>
        <groupId>com.sap.cloud.yaas.service-sdk</groupId>
        <artifactId>service-sdk-generator-maven-plugin</artifactId>
        <version>${service-sdk.version}</version>
        <configuration>
            <outputApiTitle>${Name-of-generated-JAX-RS-Feature}</outputApiTitle>
            <outputPackageName>${Java-package-for-generated-code}</outputPackageName>
            <outputFolderMainGenerated>src/main/java</outputFolderMainGenerated>
        </configuration>
    </plugin>
</pluginManagement>

After you add the plug-in to your pom.xml file, you can invoke the generation by running the following:

mvn servicegenerator:generate-service

For more information about possible plug-in configurations, as well as about what code generates and in what way, see the Generating Services section.

The generate-client plug-in goal

Use this Maven goal to generate client code for a RAML API definition during the project build process.

To configure the generator plug-in to run the generate-client goal, modify or extend the service-sdk-generator-maven-plugin configuration.

    <plugin>
        <groupId>com.sap.cloud.yaas.service-sdk</groupId>
        <artifactId>service-sdk-generator-maven-plugin</artifactId>
        <version>${service-sdk.version}</version>
        <executions>
            <execution>
                <id>client-generation1</id>
                <goals>
                    <goal>generate-client</goal>
                </goals>
                <configuration>
                    <sourceRamlUri>${URL-of-RAML-API}</sourceRamlUri>
                </configuration>
            </execution>
        </executions>
    </plugin>

You must configure either the sourceRamlUri or the sourceRamlFile parameter to point to the RAML API definition from which to generate a client.

For more information about this feature of the Generator plug-in, see the Generating Clients section.

Eclipse IDE integration

If you are using Eclipse IDE and want to ensure that it does not raise "Plug-in execution not covered by lifecycle configuration" errors, add or merge the following snippet into your pom.xml file:

    <pluginManagement>
        <plugins>
            <plugin>
                <groupId>org.eclipse.m2e</groupId>
                <artifactId>lifecycle-mapping</artifactId>
                <version>${eclipse.m2e.version}</version>
                <configuration>
                    <lifecycleMappingMetadata>
                        <pluginExecutions>
                            <pluginExecution>
                                <pluginExecutionFilter>
                                    <groupId>com.sap.cloud.yaas.service-sdk</groupId>
                                    <artifactId>service-sdk-generator-maven-plugin</artifactId>
                                    <versionRange>[${service-sdk.version},)</versionRange>
                                    <goals>
                                        <goal>generate-service</goal>
                                        <goal>generate-client</goal>
                                    </goals>
                                </pluginExecutionFilter>
                                <action>
                                    <ignore />
                                </action>
                            </pluginExecution>
                        </pluginExecutions>
                    </lifecycleMappingMetadata>
                </configuration>
            </plugin>
        </plugins>
    </pluginManagement>

Existing development tools are often confused by code-generation approaches. As a result, you can unintentionally introduce compilation problems by editing files in your IDE before the code generation phase completes. One known problem occurs in Eclipse when it automatically removes imports that point to classes it cannot find. If the code generation phase fails, rebuild the project and always refresh your project in your IDE first, because the IDE may try to fix the compilation errors by automatically removing the missing imports.


Utilities

API Designer

The API Designer is a standalone application (.war) used to write RAML files. To start it, place service-sdk-api-designer-jetty-console.war in the /api folder of your application. You can also start it from the command line using the following command:

java -jar service-sdk-api-designer-jetty-console.war --headless

The API Designer is not currently supported in Internet Explorer.


Generate Services

One of the plug-ins that the Service SDK provides is the service-sdk-generator-maven-plugin. The purpose of the plug-in is to support exposing JAX-RS endpoints based on the RAML file, which contains your API definition. To enable and configure the plug-in in your project for the basic use case, refer to the Plug-ins section. This document provides more detailed information about possible plug-in configurations and about the code generation itself.

General concept

The generate-service goal of the plug-in exposes your RAML API definition as generated JAX-RS endpoints in your Java project. These endpoints reflect all the resources and actions from the underlying RAML file. Furthermore, test skeletons also generate so that you have a test setup covering all endpoints out-of-the-box.

The RAML API definition

After you install and configure the plug-in as described in the Plug-ins section, you can invoke the plug-in by executing the generate-service goal. To generate JAX-RS endpoints, the plug-in needs to find your API definition. By default, the plug-in looks for the definition inside your project, in the following RAML file:

src/main/webapp/meta-data/api.raml

Putting your RAML API definition inside the webapp directory has the additional benefit that the running service automatically publishes your API. Relative to the base-URI of the service, it is available at meta-data/api.raml as suggested by YaaS' API best practices.

Plug-in configuration

The preceding convention is all you need to know to generate code from your RAML API definition. Nevertheless, the service-sdk-generator-maven-plugin supports numerous configuration parameters that you can use to customize its behavior.

Here is the full list of configuration options, together with their default values. All are optional.

Configuration parameter | Description | Default
sourceRamlFile | Location of the RAML file that specifies the API from which to generate the service. | src/main/webapp/meta-data/api.raml
inputFolders | A list of directories that contain additional input files needed during service generation. All these directories are used for URI resolution, for example, when including additional definitions in RAML or JSON-Schema files. These directories are also scanned for the legacy api-listing.json file, as described in this topic. | src/main/webapp/meta-data and the legacy src/main/resources/api
outputPackageName | The Java package name to use for the generated sources. | ${groupId}.${artifactId}
outputSubpackageNameDto | Indicates the subpackage name where the DTOs generate, relative to the outputPackageName. | (empty string)
outputSubpackageNameImpl | Indicates the subpackage name where the resource and test implementations generate, relative to the outputPackageName. | (empty string)
outputSubpackageNameParam | Indicates the subpackage name where the parameter POJOs generate, relative to the outputPackageName. | (empty string)
outputApiTitle | The API title to use for generated code that is global to the whole API, for example, the generated Jersey Feature. | Api
outputFolderMainSources | The output directory that holds the generated resource implementation stubs. The Service Generator does not modify any classes that already exist in this output directory. | src/main/java
outputFolderMainResources | The output directory that holds generated XML files that define Spring beans for resource implementations. | src/main/resources/META-INF
outputFolderMainGenerated | The output directory that holds generated resource interfaces and DTOs. The Service Generator overwrites any pre-existing contents of this output directory, and generates all the resource interfaces and DTOs anew. Therefore, do not manually modify the Service Generator outputs in this directory. Also, there is no need to put the outputs under Source Code Management. | target/generated-sources/service
outputFolderTestSources | The output directory that holds generated stubs for jUnit tests for resource implementations. The Service Generator does not modify any classes that already exist in this output directory. | src/test/java
outputFolderTestGenerated | The output directory that holds generated code that is required for testing. The Service Generator overwrites any pre-existing contents of this output directory, and generates them anew. Therefore, do not manually modify the Service Generator outputs in this directory. Also, there is no need to put the outputs under Source Code Management. | target/generated-test-sources/service
outputTypes | Indicates which types of files to generate. By default, all file types generate. The available types are: DTO (the DTOs), RESOURCE (the Jersey resource interfaces), RESOURCE_IMPL (the default resource implementations), TEST (the test interfaces), TEST_IMPL (the default test implementations), and FILE (any non-Java files, including the package-info.java files). Partially generating files can cause compilation errors. | DTO,RESOURCE,RESOURCE_IMPL,TEST,TEST_IMPL,FILE
async | Indicates whether the generated sources support asynchronous processing, as described below. | false
versionable | Indicates whether to append the RAML's version attribute when constructing the package names. | false
contexts | A list of XML files that contain Spring application contexts. These define the Spring beans that are available to the Service Generator. Use the contexts to customize the behavior of the Service Generator. | "generator-core/generator-core-context.xml", "service-generator/service-generator-context.xml"
skip | Skip service code generation. The generated files in the target folder are still added to the classpath. | false
compatibilityModeEnabled | Indicates whether compatibility mode is active. Disabling compatibility mode activates features, from version 4.11 and higher, that can break backward compatibility. | true
forceOverwrite | Indicates whether existing source files are overwritten during generation. | false
outputFieldPrefixEnabled | Indicates whether field names in DTOs are prefixed with an underscore. | true

The configuration properties that you typically customize in your pom.xml file are:

  • outputApiTitle, for example, MyAwesomeService
  • outputPackageName, for example, com.mycompany.myservice
  • outputSubpackageNameDto, such as dto
  • outputSubpackageNameImpl, such as impl
  • outputSubpackageNameParam, such as param
  • sourceRamlFile and inputFolders
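
For example, a typical customization of these properties in the plug-in's configuration section might look like this; the values are illustrative:

<configuration>
    <sourceRamlFile>src/main/webapp/meta-data/api.raml</sourceRamlFile>
    <outputApiTitle>MyAwesomeService</outputApiTitle>
    <outputPackageName>com.mycompany.myservice</outputPackageName>
    <outputSubpackageNameDto>dto</outputSubpackageNameDto>
    <outputSubpackageNameImpl>impl</outputSubpackageNameImpl>
    <outputSubpackageNameParam>param</outputSubpackageNameParam>
</configuration>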

To keep all of the generated files outside of your target folder and under your full control, specify the following property with the example value:

<configuration>
    <outputFolderMainGenerated>src/main/java</outputFolderMainGenerated>
</configuration>

That value forces the generation of DTOs and resource interfaces into the src/main/java folder instead of the default target folder.

To see the command line parameters that correspond to each of the preceding configuration properties, run the following:

mvn help:describe -Dplugin=servicegenerator -Ddetail

Partial generation

When you generate your files outside of the default target folder, you might want to regenerate only part of the files using the outputTypes configuration property:

mvn servicegenerator:generate-service -Dservice-generator.output.types=DTO,RESOURCE

To force overwriting, set the overwrite property to true:

mvn servicegenerator:generate-service -Dservice-generator.output.types=DTO,RESOURCE -Dservice-generator.output.overwrite=true

To permanently generate only a subset of files, set the outputTypes property in the plug-in configuration:

<configuration>
    <outputFolderMainGenerated>src/main/java</outputFolderMainGenerated>
    <outputTypes>
        <outputType>DTO</outputType>
        <outputType>TEST</outputType>
        <outputType>RESOURCE</outputType>
    </outputTypes>
</configuration>

Generate source files outside of the main source folder

You might want to generate the DTOs as well as resource interfaces into a separate source folder. This ensures that the generated code that you typically do not need to modify is kept separate from the production code. It can be useful, for example, when you have your own code quality rules to which the generated code does not conform.

To generate into the src/generated/java folder, configure the following:

<configuration>
    <outputFolderMainGenerated>src/generated/java</outputFolderMainGenerated>
    ...
</configuration>

Add the new sources to the Maven sources:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>build-helper-maven-plugin</artifactId>
    <version>${build.helper.maven.plugin.version}</version>
    <executions>
        <execution>
            <id>add-source</id>
            <phase>generate-sources</phase>
            <goals>
                <goal>add-source</goal>
            </goals>
            <configuration>
                <sources>
                    <source>src/generated/java</source>
                </sources>
            </configuration>
        </execution>
    </executions>
</plugin>

Generate service code on each build

To ensure that the RAML definition of your service and its implementation stay synchronized, you can enable generation on every build. To do so, add the plug-in execution inside your build tag in the pom.xml file:

<plugin>
    <groupId>com.sap.cloud.yaas.service-sdk</groupId>
    <artifactId>service-sdk-generator-maven-plugin</artifactId>
    <executions>
        <execution>
            <goals>
                <goal>generate-service</goal>
            </goals>
        </execution>
    </executions>
</plugin>

Because no generated files are overwritten by default, the easiest solution is to generate the DTOs and resource interfaces into the target folder, which causes the files to regenerate after every execution of mvn clean. For example:

<pluginManagement>
    <plugin>
        <groupId>com.sap.cloud.yaas.service-sdk</groupId>
        <artifactId>service-sdk-generator-maven-plugin</artifactId>
        <version>${service-sdk.version}</version>
        <configuration>
            <outputFolderMainGenerated>target/generated-sources/service</outputFolderMainGenerated>
            ...
        </configuration>
    </plugin>
</pluginManagement>

The resource and test implementations are not overwritten by default, which is good, as you might have your business logic there already.

Be aware that this approach might cause compilation errors when you build your project after changing the RAML file. Ultimately, that is the goal of this approach: you receive immediate notification when the RAML definition no longer matches the implementation.

Import the sources from the target folder to Eclipse IDE

If you are using the Eclipse IDE, you can configure the m2eclipse plug-in to automatically add the target sources to the Java Build Path of your project. Add the following snippet in the pluginManagement section:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-eclipse-plugin</artifactId>
    <version>${maven.eclipse.plugin.version}</version>
    <configuration>
        <additionalBuildcommands>
            <buildCommand>
                <name>org.eclipse.m2e.core.maven2Builder</name>
            </buildCommand>
            <buildCommand>
                <name>org.springframework.ide.eclipse.core.springbuilder</name>
            </buildCommand>
        </additionalBuildcommands>
        <additionalProjectnatures>
            <projectnature>org.eclipse.m2e.core.maven2Nature</projectnature>
            <projectnature>org.springframework.ide.eclipse.core.springnature</projectnature>
        </additionalProjectnatures>
        <sourceInclusions>
            <sourceInclusion>target/generated-sources/service/**</sourceInclusion>
            ...
        </sourceInclusions>
    </configuration>
</plugin>

After that, run the following command before importing your project into Eclipse:

mvn eclipse:m2eclipse

Note that this command resets all previous modifications to the build path. When you import the project into Eclipse, choose the Import new Maven Project option.

Legacy API-listing file

The preferred way to configure service generation is through the aforementioned configuration parameters of the Maven plug-in. However, for historical reasons, an alternative, legacy approach exists. It assumes that one of the configured inputFolders, by default src/main/resources/api, contains an api-listing.json file.

The purpose of the api-listing.json file is to list one or more APIs of the project, specify information related to code generation, such as the package name used to generate the files, and include a reference to the RAML file describing the API. The api-listing.json file is a JSON file in the following format:

{
  "name": "Wishlist Service",
  "references": [
    {
      "namespace": "wishlistservice",
      "description": "Service for managing wishlists",
      "path": "/",
      "type": "RAML",
      "fileName": "wishlistservice.raml",
      "packageName": "com.sap.cloud.yaas.wishlistservice.web",
      "async": false
    }
  ]
}

The legacy api-listing.json file is deprecated. Future versions of the Service SDK will not support this configuration approach. It is recommended that you configure service generation through the configuration parameters of the Maven plug-in instead.

Third-party dependencies

The generated code is based on the JAX-RS specification, so after code generation your project depends on the library containing that specification. However, you also need an implementation of the specification in your classpath to run the code; the recommended implementation is Jersey. The generated test classes are based on the Jersey test framework, so the test scope of the generated code introduces a dependency on Jersey.

This example Maven configuration illustrates the typical setup of a project using the code generator:

<dependency>
    <groupId>javax.ws.rs</groupId>
    <artifactId>javax.ws.rs-api</artifactId>
    <version>${jaxrs.version}</version>
</dependency>
<dependency>
    <groupId>org.glassfish.jersey.containers</groupId>
    <artifactId>jersey-container-servlet</artifactId>
    <version>${jersey.version}</version>
</dependency>
<dependency>
    <groupId>org.glassfish.jersey.media</groupId>
    <artifactId>jersey-media-json-jackson</artifactId>
    <version>${jersey.version}</version>
</dependency>
<dependency>
    <groupId>org.glassfish.jersey.ext</groupId>
    <artifactId>jersey-spring3</artifactId>
    <version>${jersey.version}</version>
</dependency>
<dependency>
    <groupId>org.glassfish.jersey.ext</groupId>
    <artifactId>jersey-bean-validation</artifactId>
    <version>${jersey.version}</version>
</dependency>
<dependency>
    <groupId>org.glassfish.jersey.test-framework.providers</groupId>
    <artifactId>jersey-test-framework-provider-grizzly2</artifactId>
    <version>${jersey.version}</version>
    <scope>test</scope>
</dependency>

Generated code structure

For every root resource, a JAX-RS resource interface generates into the target/generated-sources/service folder, which is automatically included in the compilation phase. The required data transfer objects (DTOs) also generate into the target/generated-sources/service folder, as do the Parameter classes for traits defined in the RAML definition. The resource classes implementing the generated interfaces generate once into the src/main/java folder, where you implement the actual logic of the resource methods. These classes do not regenerate if they already exist, so the plug-in never deletes your code.

The generate-service goal also generates the jUnit integration test classes for you. They are placed in the package specified by outputPackageName, inside the src/test/java directory. These files generate:

  • AbstractResourceTest extending JerseyTest – The base abstract class for all the tests, which provides the basic configuration for the Jersey test

    The Jersey Test Framework allows you to write integration tests for RESTful applications without needing to deploy the application during the test run.
  • A test class for each resource, extending the AbstractResourceTest class

A test class for a given resource has a test method generated for each resource method, together with a simple implementation that calls the resource and checks the response code. You can extend and modify these tests in any way that suits your needs. Similar to the classes generated in the src/main/java folder, these test classes do not regenerate if they already exist. The main purpose of these tests is to provide a template so that developers can quickly start writing and implementing tests.
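
For illustration, a generated test method typically looks similar to this sketch. The resource path and names follow the wishlist example and might differ in your project:

@Test
public void testGetWishlists()
{
    // calls the resource through the Jersey test client and checks the response code
    final Response response = target("/wishlists").request().get();
    Assert.assertEquals(Response.Status.OK.getStatusCode(), response.getStatus());
}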

Asynchronous processing support

JAX-RS and Jersey support asynchronous processing, as does the Service Generator. To enable service generation that is compliant with the JAX-RS asynchronous specification, set the async parameter to true for the service-sdk-generator-maven-plugin. If the async parameter is enabled, a generated method signature looks similar to this:

@javax.ws.rs.Path("/wishlists")
public interface WishlistsResource
{
    @javax.ws.rs.GET
    void get(@Suspended final AsyncResponse asyncResponse);
         ....

This example differs from the synchronous signature in the return type (void) and has the AsyncResponse instance as an additional parameter.
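
For illustration, an implementation of such a method can hand the work off to another thread and resume the response when it finishes. This is a minimal sketch, not generated code; the executorService and loadWishlists names are assumptions:

@Override
public void get(@Suspended final AsyncResponse asyncResponse)
{
    // process the request asynchronously and resume the suspended response when done
    executorService.submit(() -> {
        final Object result = loadWishlists(); // hypothetical business logic
        asyncResponse.resume(Response.ok(result).build());
    });
}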

For more information about asynchronous processing, see the Jersey Reference Documentation.

Changing the value of the async flag affects the generated code. Be aware of code compilation issues for resource implementations, which generate only once.

Multiple APIs in one project

If your service needs to expose multiple, independent APIs, then you can generate JAX-RS endpoints for all of them by using multiple independent executions of the service-sdk-generator-maven-plugin. Specify the following information:

  • Specify an id for each execution, which must be unique among all executions of the service-sdk-generator-maven-plugin.
  • For each execution, configure a distinct sourceRamlFile that represents the respective RAML API definition.
  • Configure a different outputPackageName for each execution. This prevents naming conflicts among multiple generated JAX-RS endpoints and DTOs.

Here is an example of a service-sdk-generator-maven-plugin configuration for multiple APIs:

    <plugin>
        <groupId>com.sap.cloud.yaas.service-sdk</groupId>
        <artifactId>service-sdk-generator-maven-plugin</artifactId>
        <version>${service-sdk.version}</version>
        <executions>
            <execution>
                <id>service-api</id>
                <goals>
                    <goal>generate-service</goal>
                </goals>
                <configuration>
                    <outputApiTitle>Api</outputApiTitle>
                    <outputPackageName>my.project.api</outputPackageName>
                </configuration>
            </execution>
            <execution>
                <id>service-management-api</id>
                <goals>
                    <goal>generate-service</goal>
                </goals>
                <configuration>
                    <sourceRamlFile>src/main/webapp/meta-data/management/api.raml</sourceRamlFile>
                    <outputApiTitle>ManagementApi</outputApiTitle>
                    <outputPackageName>my.project.management.api</outputPackageName>
                </configuration>
            </execution>
        </executions>
    </plugin>


Mime Types

Multiple response types

If a resource method defined in RAML has many response types, only one Java method generates that has all types configured with the @Produces annotation. The default implementation of the method returns this:

  • A DTO – If all response types are using the same schema
  • A string – If at least two types are using different schemas, or at least one type has no schema

Multiple accepted types

If a request accepts many media types, then multiple methods generate:

  • Two media types with different schemas result in generating two methods.
  • Two media types with the same schema result in one method having both types listed in the @Consumes annotation.

For the types application/json, application/xml, or text/plain, the method accepts:

  • A DTO – If the schema is defined for the method of a given type
  • A string – If no schema is defined for this type, or if the schema type String is used

A media type different from those three results in no body input parameter being generated. In this case, use the HttpServletRequest attribute to get the plain input stream. For example, consider a resource with this definition:

/products:
  put:
    body:
      application/json:
        schema: icke
      application/xml:
        schema: icke
      text/plain:
        example: "test"
    responses:
      200:
        body:
          application/json:
            schema: icke
          text/plain:
            example: "test"
  get:
    responses:
      200:
        body:
          application/json:
            schema: icke
          application/xml:
            schema: icke

The previous example generates the following methods:

@javax.ws.rs.Path("/wishlist/{version}/users/{userid}/products")
public interface ProductsResource
{
    @Path("")
    @javax.ws.rs.PUT
    @javax.ws.rs.Consumes({"application/json","application/xml"})
    @javax.ws.rs.Produces({"application/json","text/plain"})
    public Response put(  final org.training.wishlist.Icke icke);

    @Path("")
    @javax.ws.rs.PUT
    @javax.ws.rs.Consumes({"text/plain"})
    @javax.ws.rs.Produces({"application/json","text/plain"})
    public Response put(  final java.lang.String string);

    @Path("")
    @javax.ws.rs.GET
        @javax.ws.rs.Produces({"application/json","application/xml"})
    public Response get();
}

public class DefaultProductsResource implements ProductsResource
{
    @javax.ws.rs.core.Context
    private UriInfo uriInfo;

    @Override
    public Response put(final Icke icke)
    {
        // place some logic here and fill a proper response entity according to the accepted response type
        return Response.ok("myEntity").build();
    }

    @Override
    public Response put(final String string)
    {
        // place some logic here and fill a proper response entity according to the accepted response type
        return Response.ok("myEntity").build();
    }

    @Override
    public Response get()
    {
        final Icke dto = new Icke();
        // place some logic here to fill in the DTO
        return Response.ok(dto).build();
    }
}

Two methods generate for the PUT request. Because XML and JSON have the same schema, they are both covered in one method, while the text is handled in a separate method using String as the body parameter. Different method names are not required because the parameter list always differs. The PUT response uses a string as a dummy response object because it is not clear which DTO can be used for the response, as the text is also a possible response. In contrast, the GET response uses the Icke DTO, because possible response types have the same Icke schema.


Error Responses

DTO generation for error responses

DTOs are generated for all schemas defined in the schemas section of RAML, as well as for inline schema definitions of request and response bodies. One exception is made for the bodies of error responses (response codes 4xx or 5xx): DTOs are not generated for their inline schema definitions. An error response in YaaS should always conform to the YaaS error response schema. By not generating a DTO for inline error response schemas, the generator enables you to specify the YaaS schema in an inline way, as shown in the following example:

    responses:
      500:
        body:
          application/json:
            schema: !include https://pattern.yaas.io/v1/schema-error-message.json

Be aware, however, that you will still have to reference this schema in the schemas section of your RAML in order to have at least one error message DTO generated:

    schemas:
    - errorSchema: !include https://pattern.yaas.io/v1/schema-error-message.json

If you need a specialized error schema for each of your error responses, you should declare them in the schemas section and reference them properly to still have multiple DTOs generated:

    schemas:
    - myErrorSchema: !include myErrorSchema.json
    - myOtherErrorSchema: !include myOtherErrorSchema.json
    ...
    responses:
      400:
        body:
          application/json:
            schema: myErrorSchema
      500:
        body:
          application/json:
            schema: myOtherErrorSchema


PATCH Method

The code generator can process PATCH methods. The PATCH method is not yet part of the JAX-RS specification, so Jersey does not support it, and no code can be generated using pure Jersey features. However, because PATCH support is available as a draft, the Service SDK provides an early integration of it for Jersey, located in the service-sdk-jersey-support library. The generated code is based on this library. This is an example of a typical PATCH method defined in RAML:

...
schemas:
 - json-patch: !include https://github.com/fge/sample-json-schemas/blob/master/json-patch/json-patch.json
...
/wishlists/{id}:
    patch:
      headers:
        hybris-tenant:
          description: The ID of the calling tenant
          type: string
          required: true
          example: mySellerId
      body:
        application/json-patch+json:
          schema: json-patch
          example: "[{'op':'replace','path':'/sku','value':'foo'}]"
      responses:
        200:
          body:
            application/json:
              schema: wishlist

Limitations

  • The usage of the PATCH method introduces a dependency on the service-sdk-jersey-support library. If there are compilation problems after code generation, ensure that you have this library in your classpath.
  • The usage of the PATCH method requires a corresponding GET method in the same resource. For more information, see the PatchFeature section in the Jersey Support library documentation.
  • The default HTTP client used by the Jersey client does not support the PATCH method. To use the Jersey client, you need one of the provided client connectors, which all support PATCH. For more information, see the Jersey Client Connectors document. Because the generated test classes use the Jersey client, a connector must be used. By default, the generated test code uses the Grizzly connector. If a PATCH method is defined in RAML, it requires a dependency on the connector, as shown in this example:

    <dependency>
      <groupId>org.glassfish.jersey.connectors</groupId>
      <artifactId>jersey-grizzly-connector</artifactId>
      <version>${jersey.version}</version>
      <scope>test</scope>
    </dependency>
    
  • If the tests are generated for the first time, the PATCH feature of the library is enabled in the test application, and the Grizzly connector is configured for the client automatically. If you have existing tests, you might have to configure this feature manually in your test setup.
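
For existing tests, a manual setup might look similar to this sketch, which assumes the Jersey test framework; DefaultWishlistsResource is a hypothetical resource class:

public class WishlistsResourceTest extends JerseyTest
{
    @Override
    protected Application configure()
    {
        // register your resources plus the PATCH support from service-sdk-jersey-support
        return new ResourceConfig(DefaultWishlistsResource.class).register(PatchFeature.class);
    }

    @Override
    protected void configureClient(final ClientConfig config)
    {
        // route client requests through the Grizzly connector, which supports PATCH
        config.connectorProvider(new GrizzlyConnectorProvider());
    }
}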


Additional Properties

The Service Generator supports the additionalProperties keyword in the JSON schemas according to the specification.

This keyword allows a JSON document to contain properties that are not specified in the properties section of the related JSON schema. To use it, specify additionalProperties in the JSON schema, optionally with a schema for the additional properties. The code generated by the Service Generator treats an absent additionalProperties keyword as if the related JSON document cannot have additional properties. Consider a JSON schema similar to this example:

{
    "type":"object",
    "title":"SampleType",
    "properties":{
        "id":{"type":"string"},
        "name":{"type":"string"}
    }
}

And a JSON document similar to this example:

{
    "id":"123",
    "name":"testName",
    "size":5
}

In the previous examples, the document is not valid because the property size is not defined in the schema, and the schema does not allow additional properties. To enable this feature, define the schema as follows:

{
    "type":"object",
    "title":"SampleType",
    "properties":{
        "id":{"type":"string"},
        "name":{"type":"string"}
    },
    "additionalProperties":{"type":"number"}
}

Additional number properties can now be used in a document. The Service Generator supports this feature by adding the member variable additionalProperties of type Map to the generated DTO. The map's value type derives from the optional schema provided with the additionalProperties definition. In the previous example, the map is of type Map<String,Number>.

public class SampleType
{
    private Map<String, Number> additionalProperties;
    ...
}

Without special handling, the Jackson marshaller would not serialize and deserialize the additional set of properties as you might expect: it would produce an additional level of wrapping, with the additional properties nested in a separate JSON object. To prevent this, the getter and setter methods for the additionalProperties member are annotated with @JsonAnyGetter and @JsonAnySetter. The generated code for the JSON schema defined in the previous example looks similar to this:

public class SampleType
{
    private String id;
    private String name;
    private Map<String, Number> additionalProperties;

    // getters and setters for id and name are skipped

    @JsonAnyGetter
    public Map<String, Number> getAdditionalProperties()
    {
        return additionalProperties;
    }

    @JsonAnySetter
    public void setAdditionalProperties(final String key, final Number value)
    {
        if (this.additionalProperties == null)
        {
            this.additionalProperties = new LinkedHashMap<>();
        }
        this.additionalProperties.put(key, value);
    }
}
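
A minimal usage sketch of the resulting behavior, assuming the SampleType DTO above and Jackson's ObjectMapper; the values are illustrative:

final ObjectMapper mapper = new ObjectMapper();
final SampleType sample = mapper.readValue(
        "{\"id\":\"123\",\"name\":\"testName\",\"size\":5}", SampleType.class);

// "size" is not declared in the schema, so it lands in the additionalProperties map
final Number size = sample.getAdditionalProperties().get("size"); // 5

// serializing writes the additional properties back at the top level, without extra wrapping
final String json = mapper.writeValueAsString(sample);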


OneOf Properties

The Service Generator supports the oneOf validation keyword in JSON schemas, according to the specification. This keyword enables a property in the JSON schema to be defined so that its type can vary between different alternatives, as in this example:

"properties": {
    "description":
        {
            "oneOf": [
                {
                    "type": "string"
                },
                {
                    "type": "object",
                    "additionalProperties": true
                }
            ]
        }
    }
}

A conforming JSON object might look like this:

{
    "description":"Hi. I'm a simple description."
}

Alternatively, it might look like this:

{
    "description":{
        "en":"Howdy. I'm a more complex description. This is my English representation.",
        "de":"Und dies ist meine deutsche Variante.",
        "foobar":"In fact, the description can consist of any additional properties in this example."
    }
}

The oneOf keyword is not object-oriented in the strict sense. It enables alternation between types that are not part of a common hierarchy. Thus, the receiver of the conforming JSON data cannot make any reasonable assumptions about its structure without actually analyzing the contents. Consequently, the oneOf keyword does not map easily to a strongly typed, object-oriented language such as Java.

Generated data transfer objects (DTOs)

The service-generator supports the oneOf keyword when generating DTOs. It makes the use of oneOf properties as simple and straightforward as possible. In the previous example, these methods are generated in the corresponding DTO:

// type-safe setter for the String alternative of the oneOf property "description"
public void setDescription(final String description)

// type-safe setter for the Map alternative of the oneOf property "description"
public void setDescription(final Map<String, Object> description)

// generic getter for the oneOf property "description" that tries to cast its value to T
public <T> T getDescription() throws ClassCastException

// convenience method that checks whether the oneOf property "description" is assignable to the given type
public boolean isDescriptionTypeOf(final Class<?> clazz) throws IllegalArgumentException

// convenience method to check whether the oneOf property "description" has a non-null value
public boolean hasDescription()

Similar methods are available for any other oneOf property in the DTO. For more detailed information on these methods, see the JavaDocs.
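
For illustration, application code can branch on the actual alternative. This is a minimal sketch, assuming a generated DTO class with the oneOf property "description" from the example above; the class name Item is hypothetical:

// returns a printable description, whichever oneOf alternative is present
public String renderDescription(final Item item)
{
    if (!item.hasDescription())
    {
        return "";
    }
    if (item.isDescriptionTypeOf(String.class))
    {
        return item.getDescription();
    }
    // otherwise it is the Map alternative; pick the English variant from the example
    final Map<String, Object> localized = item.getDescription();
    return String.valueOf(localized.get("en"));
}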

Resolve ambiguities during deserialization

The alternative types of a oneOf property might not be distinct enough, such as if they define objects having similar properties or if they define primitive types that can be converted between each other. Consider these types in your RAML API definition:

schemas:
  - foo: |
      {
        "$schema": "http://json-schema.org/draft-04/schema#",
        "type": "object",
        "properties": {
          "name": { "type": "string" },
          "code": { "type": "string" }
        }
      }
  - bar: |
      {
        "$schema": "http://json-schema.org/draft-04/schema#",
        "type": "object",
        "properties": {
          "name": { "type": "string" },
          "quantity": { "type": "number" }
        }
      }
  - fooOrBar: |
      {
        "oneOf": [
          { "$ref": "foo" },
          { "$ref": "bar" }
        ]
      }

When a property is declared to be of type fooOrBar, its JSON data can match more than one alternative. For example, this JSON object:

{
  "name" : "stuff"
}

This could be interpreted either as a JSON object of type foo or as a JSON object of type bar. However, deserialization logic needs to map the JSON object to one particular Java class. The Service Generator provides DTOs that resolve such ambiguities by cooperating with JSON deserialization logic. They enforce the following rule:

If more than one oneOf alternative matches the JSON value, the ordering inside the JSON schema definition is taken into account: the JSON value is deserialized to the type that is mentioned first in the list of oneOf alternatives.

Therefore, the JSON object from the previous example is deserialized to a Java representation of type foo, because foo is mentioned before bar in the oneOf keyword. Keep this ordering in mind whenever you use the oneOf keyword, as ambiguities can occur.


Reusing RAML Patterns

YaaS provides the RAML Pattern Library with RAML resource types and traits that you can remotely reference in your RAML file. These patterns simplify defining APIs, particularly the APIs conforming to Hybris-specific standards.

The advantage of using patterns, especially traits, is not only that they make the RAML definition more readable, but also that you can define reusable elements. The Service Generator uses these elements to make the generated code more readable by grouping the parameters of all method signatures. The Service Generator is influenced by traits in the following areas:

  • For every trait, a DTO class is generated.
  • The parameters in resource method signatures are grouped by DTO classes.

This also applies to patterns and traits that are defined locally and not defined in the SAP Hybris RAML Pattern Library.

Traits as DTO classes

For every trait defined locally or remotely in a RAML file, a DTO class suffixed with Parameters is generated in the base folder for generated Java sources. The class has accessors for all header and query parameters of requests defined in the trait. All accessors are annotated with the proper validation constraints defined for the parameters in the trait. For example:

paged:
  queryParameters:
    pageNumber:
      type: integer
      minimum: 1
    pageSize:
      type: integer
      minimum: 1

The trait definition example results in a generated DTO class having two accessors for the defined query parameters with all validation and Jersey annotations in place. For example:

public class PagedParameters
{
    @javax.ws.rs.QueryParam("pageNumber")
    @javax.validation.constraints.DecimalMin("1")
    private java.lang.Integer pageNumber;

    @javax.ws.rs.QueryParam("pageSize")
    @javax.validation.constraints.DecimalMin("1")
    private java.lang.Integer pageSize;
...
}

The advantage of these classes is that they can be used at the resource method signatures to improve their readability.

Traits in resource method signatures

The method signatures of resources generated from a RAML definition can easily become cluttered by a long parameter list, each parameter with its validation constraints. Typically, the parameter list is use-case driven and reoccurs in other methods. For example, query parameters for paging typically occur in every collection getter method. This not only complicates signatures, it also duplicates code and logic. Maintenance also becomes more labor-intensive, for example, when updating constraints for parameters defined on several methods.

In a RAML definition, the solution is the trait concept. A trait defines a reusable pattern that applies to many methods. The method definition is more concise, and you maintain the parameters in one place: the trait. The service generation extends this approach by grouping parameters during signature generation. When a trait applies to a method, the related method signature does not list the separate parameters. Instead, it uses the generated DTO class as a parameter. Therefore, the parameter list is shorter, and the validation annotations are contained in the DTOs. This keeps the signature clean, and the DTOs are reused in all methods where the trait is applied.

As soon as a trait defines a parameter, the generator references the corresponding DTO class instead of injecting the parameter directly. Consider a method that applies the yaasAware and paged traits and defines a custom parameter, as shown in this example:

traits:
  - !include https://pattern.yaas.io/v1/trait-paged.yaml
  - !include https://pattern.yaas.io/v2/trait-yaas-aware.yaml

/wishlists:
  is: [yaasAware]
  get:
    is: [paged]

    queryParameters:
        nullable:
          type: boolean

The generated method signature looks similar to this example:

@javax.ws.rs.Path("/wishlists")
public interface ProductsResource
{
    @javax.ws.rs.GET
    Response get(@BeanParam @Valid YaasAwareParameters yaasAware,
                 @BeanParam @Valid PagedParameters paged,
                 @QueryParam("nullable") Boolean nullable);

            ....

The parameters defined in the traits no longer clutter the method signature. Instead, the use case-oriented DTOs are used, while parameters customized to the specific method generate separately. However, if a trait parameter is redefined at the method level, that method describes a different use case, and the related DTO class is not used for generation. Instead, all parameters of the trait are injected separately into the method, including the redefined parameter.

In other words, the Parameters classes are used in a method signature only when no trait parameter is redefined at the method level.

Resource types

There are two RAML resource types provided in the RAML Pattern Library, which are usually used in conjunction:

  • collection: Used for a REST resource to express that it deals with a collection of elements
  • element: Used for a sub-resource of a collection to express that it represents an element of the collection

The respective resource type automatically assigns the appropriate schema types to the actions of the REST resource, based on the resource's name. This encourages RESTful design, clear naming, and keeps your APIs consistent. It even adds appropriate RAML documentation. For example, it is common practice to have a POST action defined on a collection resource to add to the collection or a PUT method defined on an element resource to modify the element.

An important feature of the aforementioned resource types is that they use the language transformation functions of RAML when making assumptions about the name of the schema type to use. For example, if you declare an /items resource to use the collection resource type, the schema name for GET responses is items, because GET is meant to retrieve multiple elements of the collection. However, the schema name for the request body of a POST on the /items resource is the singularized form item, because a POST on a collection is meant to add a single element. Similarly the element resource type always expects a schema name in singularized form, because all of its actions are intended to work on a single element. In summary, when using these resource types in your RAML definition, pay attention to the naming of your resources and choose English nouns that have different plural and singular forms. If that is not possible, manually specify all schema definitions, which the resource types would otherwise assign.
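
A minimal sketch of this convention in RAML, assuming the collection and element resource types are included from the RAML Pattern Library (the include statements are omitted here):

/items:
  type: collection      # GET uses the schema "items", POST uses the singular schema "item"
  /{itemId}:
    type: element       # actions use the singular schema "item"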


Bean Validation Support

The Service Generator supports different kinds of constraints definable in an API, such as a mandatory attribute, by using the annotations of the Bean Validation specification in the generated code. By enabling the Bean Validation framework in your project, you get out-of-the-box validation of the defined constraints.

In general, there are two categories of constraints. One category is specified in the JSON schema definition, which affects the generated DTO classes. The RAML Specification specifies the second category, which affects the generated Jersey endpoints. Because the RAML and JSON schema specifications are more expressive than what is easily validated using the Bean Validation specification, not all constraints are currently supported.

Bean Validation is disabled by default for the generated tests so that the tests pass with the first code generation. In the generated AbstractResourceTest class, you can enable the validation for all of your tests, including these.

JSON schema constraints

The generator supports the following constraints of a JSON schema definition, and the constraints generate a 400 Bad Request response code when using the default exception mapping for validation exceptions:

Type | Possible Constraints
--- | ---
string | maxLength, minLength, pattern, format: email
integer | minimum, exclusiveMinimum, maximum, exclusiveMaximum
array | minItems, maxItems


For more information about what these constraints mean, refer to the JSON Schema Validation Specification. In addition, you can specify an array of required attributes at the top level.

The following values for the format element applied to a string type influence the attribute type of the generated DTO class with the following implicit conversion and validation:

Format Value | Generated Attribute Type
--- | ---
uri | java.net.URI
regex | java.util.regex.Pattern
date-time | java.util.Date
utc-millisec | long
email | String, with Hibernate's @Email annotation


The available JSON schema constraints are illustrated in this example:

vehicle.json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "object",
  "properties": {
    "id": {
      "type": "string",
      "maxLength": 5,
      "minLength": 3,
      "pattern": "[a-zA-Z]{1}.+",
      "required": true
    },
    "numberOfDoors": {
      "type": "integer",
      "maximum": 7,
      "exclusiveMaximum": true,
      "minimum": 2,
      "exclusiveMinimum": false
    },
    "owner": {
      "type": "string",
      "format": "email"
    },
    "doors": {
      "type": "array",
      "items": { "type": "string" },
      "minItems": 1,
      "maxItems": 4
    }
  },
  "required": ["numberOfDoors"]
}
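
The DTO generated from this schema carries corresponding Bean Validation annotations. The following is a sketch of what to expect, not exact generator output:

public class Vehicle
{
    @Size(min = 3, max = 5)
    @javax.validation.constraints.Pattern(regexp = "[a-zA-Z]{1}.+")
    private String id;

    @NotNull
    @DecimalMin(value = "2")
    @DecimalMax(value = "7", inclusive = false)
    private Integer numberOfDoors;

    @org.hibernate.validator.constraints.Email
    private String owner;

    @Size(min = 1, max = 4)
    private List<String> doors;

    // getters and setters omitted
}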

RAML named parameter constraints

RAML enables you to provide a set of attributes, or named parameters, for properties modeled in the RAML content. These properties can be queryParameters, headers, or uriParameters (path parameters). For more details, refer to the RAML Specification. The currently supported RAML validations and the response codes that a violation generates are listed in this table:

RAML Element | Types | Possible Constraints | Response Code on Violation
--- | --- | --- | ---
uriParameters | string, number, integer, boolean | type (implicit conversion), required | 404
uriParameters | string, number, integer, boolean | minLength, maxLength, pattern, enum (string); minimum, maximum (number, integer) | 400
headers | string, number, integer, boolean, date | type (implicit conversion); pattern, minLength, maxLength, enum (string); minimum, maximum (number, integer); required | 400
queryParameters | string, number, integer, boolean, date | type (implicit conversion); minLength, maxLength, enum, pattern (string); minimum, maximum (number, integer); required | 400


  • Type - Provides implicit run-time validation. The generator produces a strongly typed parameter in the generated signature of a REST resource, which enables the underlying Jersey implementation to validate the value implicitly, for example, when deciding whether a given call matches the endpoint at all.

If you define a query parameter as type string, such as this:

/SomeResource:
  queryParameters:
    myParam:
      type: string

The related query parameter in the generated method signature is of type String:

@GET
Response get(@QueryParam("myParam") String myParam);

You can call the endpoint with any string value:

http://some-hostname/SomeResource?myParam=foo
http://some-hostname/SomeResource?myParam=1

You can model the method in a stricter manner using the type integer. For example:

/SomeResource:
  queryParameters:
    myParam:
      type: integer

The generated parameter has a more restrictive type, and Jersey does an automatic conversion. If the conversion is not possible, no matching resource endpoint is found, and the call fails with a 404 error:

http://some-hostname/SomeResource?myParam=foo

Why 404 and not 400? A 404 error occurs when extracting parameters from the URI because it is a matching error with the URI, such as when a string cannot be converted to an integer. For more information, see Illegal Exceptions Error.
  • Enum - Restricts the possible values for a property of type string to a specific set of values. As a result, the generated signature is annotated with a pattern constraint defining the possible values. For example:
/User{role}:
  uriParameters:
    role:
      type: string
      enum: ["Admin","Customer"]
  get:
     headers:
       someHeader:
        type: string
        enum: ["value1","value2"]

The example results in this generated signature:

@GET
Response get(
   @Pattern(regexp="(Admin)|(Customer)") @PathParam("role") String role,
   @Pattern(regexp="(value1)|(value2)") @HeaderParam("someHeader") String someHeader);
  • pattern, minLength, maxLength, minimum, maximum, and required - Restrict headers or queryParameters values as defined in the RAML specification. The generated code validates the constraints on the basis of generated validation annotations. This RAML example illustrates the parameters, with the resulting annotations generated:
queryParameters:
  pageNumber:
    type: integer
    required: true
    minimum: 1
    maximum: 2
  id:
    minLength: 2
    maxLength: 5
    pattern: "[a-zA-Z]*"

The example RAML snippet results in method parameters generated, along with these annotations:

@GET
Response get(
  @NotNull @DecimalMin(value="1") @DecimalMax(value="2") @QueryParam("pageNumber") Integer pageNumber,
  @Size(min=2, max=5) @Pattern(regexp="[a-zA-Z]*") @QueryParam("id") String id);

Any violations of these constraints generate a 400 Bad Request error.

Because RAML is based on YAML, always use proper YAML syntax. For instance, an expression such as pattern: [a-z]* is not a valid YAML snippet and must use proper formatting, such as pattern: "[a-z]*".
Bean Validation is functionally transparent, regardless of whether the REST API is synchronous or asynchronous. In an asynchronous approach, the method signatures are slightly different from the signatures shown previously in the parts of the signature not related to validation.


Multipart Support

Currently, the service-generator does not explicitly support multipart data. However, you can still use multipart data in your RAML definition and use the generated JAX-RS annotated class.

If your RAML file contains an endpoint definition like the following:

/test:
    post:
        body:
            multipart/form-data:
                formParameters:
                    file:
                        type: file
                    text:
                        type: string

Then the method signature generated in the JAX-RS class does not contain any body parameters.

@Singleton
public class DefaultTestResource implements TestResource
{
    @Context
    private UriInfo uriInfo;

    /* POST / */
    @Override
    public Response post()
    {
        return Response.created(uriInfo.getAbsolutePath()).build();
    }
}

To access the content of the multipart form inside the generated method, perform the following steps:

  1. Add a dependency to Jersey's multipart library:
    <dependency>
     <groupId>org.glassfish.jersey.media</groupId>
     <artifactId>jersey-media-multipart</artifactId>
     <version>${jersey.version}</version>
    </dependency>
    
  2. Register the MultiPartFeature in your application:
    register(MultiPartFeature.class);
    
  3. Add a member variable of the type ContainerRequest to your class annotated with @Context:
    @Context
    private ContainerRequest request;
    
  4. Remove the @Singleton annotation from the resource class, because the ContainerRequest variable is not injected per request, but per resource class instantiation.
  5. Get form data from the request variable within the method related to the multipart request:
public class DefaultTestResource implements TestResource
{
    @Context
    private UriInfo uriInfo;

    @Context
    private ContainerRequest request;

    /* POST / */
    @Override
    public Response post()
    {
        final FormDataMultiPart part = request.readEntity(FormDataMultiPart.class);
        final String field = part.getField("text").getValue();
        System.out.println("My Form: text="+field);
        return Response.created(uriInfo.getAbsolutePath()).build();
    }
}


JS Client Generation

General concept

The API definition of your service written in RAML is the source of server- and client-side contracts that specify how to interact with your service.

The latest API Console that MuleSoft provides integrates with other products from the same company, namely the raml-client-generator. You can use this tool to dynamically generate a JavaScript client for your service, directly in the API Console.

Authenticate the client

This offering is dedicated to the Client Credentials flow. Currently, it does not provide out-of-the-box OAuth2 token maintenance, such as obtaining, caching, and refreshing the access token.

Generate the client

After loading your service in the API Console, click DOWNLOAD API CLIENT in the upper-right corner.

If you do not want to use the client generator integrated with the API Console, you can use it independently.

To generate client code, you can use the standalone JS client generator. To do this, follow the provided RAML client generator guide.

After successful installation, you should be able to run the raml-to-client command. For more information, see the raml-to-client command help.

If you want to generate the client against a remote API, download the RAML locally because raml-to-client cannot operate on a remote URI. For example, to retrieve the RAML for the Configuration service and generate a client from it:

wget https://api.beta.yaas.io/hybris/configuration/v1/meta-data/api.raml -O configuration-service-client.raml
raml-to-client configuration-service-client.raml --output your-target-client-folder --language javascript

After successful generation, you should find the client package in the provided output directory.

The generated client package should consist of:

 .
   ├── README.md
   ├── index.js
   └── package.json

Generated code explanation

  • package.json - Used mainly to integrate with dependency management tools
  • README.md - Technical documentation adjusted with RAML's specific use cases
  • index.js - Client code

After you generate the JS client-side code, you can continue with integrating the JS client code with your application, described in the following section.

The generated module name for the package.json file is based on RAML's title attribute, so you might need to change it in the RAML if you want to name the client package module differently. For the purpose of this tutorial, the RAML's title attribute is changed to:

#%RAML 0.8
title: Configuration Service Client
version: v4

Integrate the client

The integration steps depend on your local JS environment setup. The steps that follow are appropriate for the NodeJS approach.

Set the dependency

Whichever dependency management tool you use, you must inform your service about the dependency on the generated client. This example installs it locally with npm:

  npm install path/to/folder/containing/configuration-service-client --save

Then reference it within package.json:

{
  "name": "yourAppName",
  "version": "0.0.0",
  ...
  "dependencies": {
    ...
    "configuration-service-client": "0.0.0",
    ...
  }
}

Implementation

The implementation details, such as how to integrate with client code, are described within the generated README.md file. There are helpful examples showing how to pass additional parameters and headers and set the payload. It also explains how to handle normal and abnormal responses. All this is presented in an endpoint-by-endpoint manner reflecting the API being defined in the RAML.

Troubleshooting

If you encounter the following error:

global.Promise is undefined and should be polyfilled. Check out https://github.com/jakearchibald/es6-promise for more information.

You require the es6-promise module and must explicitly call the polyfill method:

require('es6-promise').polyfill();

Additionally, add the dependency to the es6-promise module:

{
   ...
   "dependencies": {
     ...
     "es6-promise": "~2.0.1",
     ...
   }
}


Generating Clients

The Service SDK provides a code generator that emits client-side Java code tailored to calling the service API, based on the RAML API definition of a service. Currently, the focus of the client generator is strictly on the client side. To generate server-side JAX-RS endpoints for your RAML API, see the Service Generator from the Service SDK.

A generated client is always specific to the service it is designed to call. It is based on the RAML file that defines the service API. If the called service uses the YaaS Service SDK, its RAML is available under the {service base path}/meta-data/api.raml path. Generated client code follows the builder pattern and builds on top of the Jersey JAX-RS client library, which provides a host of useful features for the developer.

Benefits

Having a client generated for a service by the client generator has many benefits compared to manual client implementation. These benefits include:

  • The API contract is embedded in the generated Java code.
  • Full code assistance support in any IDE when accessing service resources and invoking operations.
  • Generated JavaDocs guide you through the process while you are learning to use the client.
  • Dedicated, type-safe methods for setting headers and query parameters.
  • Access to nested REST resources is intuitive and straightforward.
  • Compile-time validation of your application code ensures that no non-existent API resources are called and no header names are misspelled.
  • Support for enforcing strict contracts, while still enabling you to set arbitrary headers or query parameters, or to wrap predefined Jersey client instances inside the generated client.
  • Developers are supported while retaining the flexibility to develop custom solutions.
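
For illustration, calling a service through a generated client looks roughly like this. This is a hypothetical sketch: the client class, resource accessors, and builder methods depend entirely on the RAML definition the client was generated from:

// all names below are illustrative, not an actual generated API
final WishlistServiceClient client = new WishlistServiceClient("https://api.example.org/wishlistservice/v1");
final Response response = client
        .wishlists()          // navigate to the /wishlists resource
        .prepareGet()         // choose the HTTP operation
        .withPageNumber(1)    // type-safe query parameter setter
        .withPageSize(10)
        .execute();           // performs the call using the wrapped Jersey client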


Maven Plug-in

Use the service-sdk-generator-maven-plugin to configure and invoke code generation for clients.

The following configuration properties are available for the plug-in:

  • sourceRamlUri – The URI of a remote RAML API definition, used to generate the client.
    If the service for which you want to generate a client uses the YaaS Service SDK, you can find its RAML file inside the static /meta-data/ directory at the root path of that service. The default location of the RAML API definition is /meta-data/api.raml. It is also recommended to append an expand query parameter with the value compact to the RAML URL for clients that use the Service SDK. This accelerates the client generation process by fetching all the external resources on the service side and sending them as a single file. You can read more about the expand query parameter in the RAML Rewriter Filter documentation.
  • sourceRamlFile – The file path of a local RAML API definition. This is an alternative to sourceRamlUri and is preferred for local files.
  • basePackage – The base Java package for the generated client classes. The default value is com.sap.cloud.yaas.api.
  • targetFolder – The directory where the generated client code is located. This directory is registered as a Maven source directory, so it becomes available to subsequent build steps, such as compilation. The default value is target/generated-sources/client.
  • targetSupportFolder – The directory location of common support code for generated clients. The sources are in the com.sap.cloud.yaas.rammler.commons package, which contains Java classes that are common for all generated clients. Without this library, your generated clients do not compile or run. This directory is registered as a Maven source directory, so it becomes available to subsequent build steps, such as compilation. The default value is target/generated-sources/client-support.

Base plug-in configuration

To integrate client generation into your Maven project, add the service-sdk-generator-maven-plugin to your pom.xml file in the build > plugins section:

    <plugin>
        <groupId>com.sap.cloud.yaas.service-sdk</groupId>
        <artifactId>service-sdk-generator-maven-plugin</artifactId>
        <version>${service-sdk.version}</version>
    </plugin>

Generate a client by command line

You can now generate the client for a service using the following Maven command:

mvn servicegenerator:generate-client -Dclient-generator.input.ramluri=URL-of-RAML-API -Dclient-generator.output.folder=src/main/java -Dclient-generator.output.supportfolder=src/main/java

Where URL-of-RAML-API is the URL that points to the RAML describing the API of the service for which to generate the client code.

Define the target folder and target support folder so the code does not generate in the build folder, which is cleaned every time you run the mvn clean command.

Generate a client on demand

Using Maven, you can preconfigure a plug-in to execute it more easily. To generate a client on demand using a preconfigured plug-in, add this configuration in the build > pluginManagement section:

   <pluginManagement>
        <plugin>
            <groupId>com.sap.cloud.yaas.service-sdk</groupId>
            <artifactId>service-sdk-generator-maven-plugin</artifactId>
            <version>${service-sdk.version}</version>
            <configuration>
                <sourceRamlUri>${URL-of-RAML-API}</sourceRamlUri>
                <targetFolder>src/main/java</targetFolder>
                <targetSupportFolder>src/main/java</targetSupportFolder>
            </configuration>
        </plugin>
   </pluginManagement>

Generate a client on every build

Configuring Maven to run client generation on every build ensures that the generated code is "synchronized" with the given RAML source. The disadvantage is that any change to the generated code is lost in the next build. To run generation on every build, configure Maven as follows in the build > plugins section:

    <plugin>
        <groupId>com.sap.cloud.yaas.service-sdk</groupId>
        <artifactId>service-sdk-generator-maven-plugin</artifactId>
        <version>${service-sdk.version}</version>
        <executions>
            <execution>
                <goals>
                    <goal>generate-client</goal>
                </goals>
                <configuration>
                    <sourceRamlUri>${URL-of-RAML-API}</sourceRamlUri>
                </configuration>
            </execution>
        </executions>
    </plugin>

This configures the service-sdk-generator-maven-plugin to generate the client code for the service described at ${URL-of-RAML-API} during your project build process.

Generate multiple clients on demand

If your application needs to call the APIs of multiple services, generate multiple clients in one Maven project using multiple independent executions of the service-sdk-generator-maven-plugin. Additionally, specify the following:

  • Specify an id for each execution that is unique among all executions of the service-sdk-generator-maven-plugin.
  • For each execution, configure one sourceRamlUri or sourceRamlFile that points to the RAML API definition used to generate a client.
  • Configure a different basePackage for each execution. This prevents naming conflicts among multiple generated clients.

To preconfigure the generation of multiple clients, first make sure that the base plug-in configuration is in place, and then add the following to the build > pluginManagement section:

    <pluginManagement>
        <plugins>
            <plugin>
                <groupId>com.sap.cloud.yaas.service-sdk</groupId>
                <artifactId>service-sdk-generator-maven-plugin</artifactId>
                <version>${service-sdk.version}</version>
                <executions>
                    <execution>
                        <id>cats-client</id>
                        <configuration>
                            <sourceRamlUri>https://cheezburger.has.can.i/meta-data/api.raml?expand=compact</sourceRamlUri>
                            <basePackage>my.project.cats.client</basePackage>
                            <targetFolder>src/main/java</targetFolder>
                            <targetSupportFolder>src/main/java</targetSupportFolder>
                        </configuration>
                    </execution>
                    <execution>
                        <id>doggz-client</id>
                        <configuration>
                            <sourceRamlUri>https://doggz.example.net/meta-data/api.raml?expand=compact</sourceRamlUri>
                            <basePackage>my.project.doggz.client</basePackage>
                            <targetFolder>src/main/java</targetFolder>
                            <targetSupportFolder>src/main/java</targetSupportFolder>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </pluginManagement>

You can then generate a client using the Maven command line. For example, to generate the cats-client, run:

    mvn servicegenerator:generate-client@cats-client

Where cats-client is the ID of the execution that preconfigures the plug-in to generate the client for the cats service.

Invoking a Maven plug-in execution by its ID from the command line is available only in Maven 3.3.1 and higher.

Generate multiple clients on every build

The following is an example of a service-sdk-generator-maven-plugin configuration for multiple clients that ensures the generated code is "synchronized" with all RAML sources.

    <plugin>
        <groupId>com.sap.cloud.yaas.service-sdk</groupId>
        <artifactId>service-sdk-generator-maven-plugin</artifactId>
        <version>${service-sdk.version}</version>
        <executions>
            <execution>
                <id>cats-client</id>
                <goals>
                    <goal>generate-client</goal>
                </goals>
                <configuration>
                    <sourceRamlUri>https://cheezburger.has.can.i/meta-data/api.raml?expand=compact</sourceRamlUri>
                    <basePackage>my.project.cats.client</basePackage>
                </configuration>
            </execution>
            <execution>
                <id>doggz-client</id>
                <goals>
                    <goal>generate-client</goal>
                </goals>
                <configuration>
                    <sourceRamlUri>https://doggz.example.net/meta-data/api.raml?expand=compact</sourceRamlUri>
                    <basePackage>my.project.doggz.client</basePackage>
                </configuration>
            </execution>
        </executions>
    </plugin>

If you need service generation, you can add it as an additional execution with the generate-service goal and service generation configuration.
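
As a hedged sketch, such an additional execution could look as follows; the execution ID is arbitrary, and the comment marks the service-generation options, which are not covered here:

    <execution>
        <id>generate-service</id>
        <goals>
            <goal>generate-service</goal>
        </goals>
        <configuration>
            <!-- service generation configuration goes here -->
        </configuration>
    </execution>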


Programming

Once you have used the service-sdk-generator-maven-plugin to generate an API client, as described in the Client Generation part of the documentation, you can invoke the client from your Java code. This section describes how to use a generated client programmatically. Most of the following example code snippets assume that the client was generated from the following RAML definition:

#%RAML 0.8
title: cat API
baseUri: http://api.hybris.com/cats
version: v1
mediaType: application/json

/cats:
  post:
    description: Register a new cat
    body:
      application/json:
    responses:
      201:
        body:
          application/json:

  /{catId}:
    uriParameters:
      catId:
        type: integer
    get:
      description: Get details of a cat
      responses:
        200:
          body:
            application/json:
        404:
          description: Cat not found
          body:
            application/json:

Client instantiation

For every generated client, there is a single Java class that serves as an entry point to the client's API. It resides in the basePackage, and its name is derived from the title of the API, with the suffix Client appended. Using the preceding RAML example, instantiate the client like this:

CatApiClient client = new CatApiClient(CatApiClient.DEFAULT_BASE_URI);
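
The constant DEFAULT_BASE_URI reflects the baseUri declared in the RAML. Assuming the constructor accepts any base URI, you can presumably point the client at another environment instead (the URL below is made up):

CatApiClient stagingClient = new CatApiClient("https://cats-staging.example.com");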

Navigate resources and actions

From the generated client, use its Builder methods to navigate to a specific API resource, and select the API action to request. Then fill in the request parameters, and execute the request:

Response response = client
        .cats()
        .catId()
        .prepareGet()
        .fillCatId(catId)
        .execute();

You do not have to specify values for all URI parameters when you navigate through the API resources. Instead, fill in the values later, using the respective fill... method of the Builder.
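
Alternatively, supply the URI parameter directly while navigating, which makes the fillCatId call unnecessary:

Response response = client
        .cats()
        .catId(catId)
        .prepareGet()
        .execute();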

Supply arbitrary headers or query parameters

Header and query parameters are normally specified after choosing an HTTP action with a prepare... method, because a RAML API can define different header and query parameters for each HTTP action. However, you can also specify your own headers or query parameters, even before choosing the HTTP action. To specify arbitrary header or query parameters, use the withHeader(String, Object...) or withQueryParameter(String, Object...) method:

Response response = client
        .cats()
        .catId(catId)
        .prepareGet()
        .withHeader("Content-type", "application/xml")
        .execute();

This approach is useful for explicitly setting a standard HTTP header, even if it is not defined in the service's RAML file.

Specify the HTTP action later

You can store a partially built Builder object and reuse it for different HTTP actions by specifying the HTTP action later in the builder chain, using the prepareAny method. However, this forgoes the action-specific header and query parameter methods, as well as the error callbacks. For example:

Response response = client
        .cats()
        .prepareAny()
        .withHeader("Content-type", "application/xml")
        .fillMethod(HttpMethod.GET)
        .execute();

Asynchronous calls

The generated client supports asynchronous execution of the REST call. To switch to asynchronous processing, change the execute method invocation to the queue method:

Future<Response> responseFuture = client
        .cats()
        .catId(catId)
        .prepareGet()
        .withHeader("Content-type", "application/xml")
        .queue();

The return type is now Future<Response> instead of Response.
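
To obtain the result, use the standard java.util.concurrent Future API. A minimal sketch, assuming you are willing to block the current thread until the response arrives:

try {
    Response response = responseFuture.get();
    // process the response as usual
} catch (InterruptedException | ExecutionException e) {
    // the call was interrupted or failed during processing
}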

Error callbacks

There are two kinds of error callback handlers: one handles an erroneous HTTP response, and the other handles a RuntimeException that occurs during request or response processing.

Request processing exceptions

There is only one method for registering a handler for request processing exceptions, called onProcessingFailure. You can register it, for example, as shown:

Response response = client
        .cats()
        .catId(catId)
        .prepareGet()
        .onProcessingFailure(throwable ->
            LOG.error("Unexpected exception: " + throwable.getMessage())
        )
        .execute();

Erroneous HTTP responses

A set of methods for registering handlers for erroneous HTTP responses is generated from the source RAML. Whenever the source RAML defines an erroneous HTTP response, that is, a response with a 4xx or 5xx status code, a corresponding method is available. For the preceding example RAML, the 404 response yields an onNotFound method.

For all other erroneous HTTP responses, an extra method called onDefaultErrorResponse is available. If no other callback is registered for an erroneous HTTP response at the moment it is received, the callback registered with onDefaultErrorResponse is called, even if the response code is not present in the source RAML.

For example:

Response response = client
        .cats()
        .catId(catId)
        .prepareGet()
        .onNotFound(resp -> {
            throw new NotFoundException("Element not found");
        })
        .onDefaultErrorResponse(resp ->
            LOG.error("Unexpected error occurred: " + response.getStatus())
        )
        .onProcessingFailure(throwable ->
            LOG.error("Unexpected exception: " + throwable.getMessage())
        )
        .execute();

Be aware that the handler registered with the onProcessingFailure method does not catch exceptions thrown inside the handlers registered with the onNotFound and onDefaultErrorResponse methods.

Client authorization

When calling other YaaS services, you often must provide an OAuth 2.0 access token in the Authorization header. If that header is declared in the service's RAML, the generated client code provides a corresponding builder method. Hence, you can obtain an access token using the authorization library and pass it to the client like this:

Response response = productClient
        .tenantProducts(tenant)
        .prepareGet()
        .withAuthorization(token)
        .execute();

However, obtaining the access token programmatically for each call can get rather cumbersome. Thus, the Service SDK provides a more convenient alternative, based on the OAuth2Filter from the authorization library. To use it, instantiate your generated client slightly differently. First, create a custom JAX-RS client, as described in the OAuth2Filter documentation. Then, pass that JAX-RS client to the constructor of your generated client. For instance, when working with a client generated for the Product service:

Client jaxRsClient = ClientBuilder
        .newBuilder()
        .register(new OAuth2Filter(accessTokenProvider))
        .build();

HybrisProductsApiClient productClient =
        new HybrisProductsApiClient(HybrisProductsApiClient.DEFAULT_BASE_URI, jaxRsClient);

Now you can use the productClient to build arbitrary requests, and it automatically obtains an OAuth 2.0 access token for you.

There is one more thing that you likely need to take care of: different requests to the same service often require different authorization scopes. For each request that you build using your generated client, you can specify a custom AuthorizationScope by means of a JAX-RS request property. The Java constant OAuth2Filter.PROPERTY_AUTHORIZATION_SCOPE defines the name of that request property. You can apply this approach to the productClient as follows:

Response response = productClient
        .tenantProducts(tenant)
        .prepareGet()
        .withRequestProperty(
                OAuth2Filter.PROPERTY_AUTHORIZATION_SCOPE,
                new AuthorizationScope(tenant))
        .execute();

For the above example, keep in mind that the authorization scopes also specify the tenant to work on, meaning the underlying YaaS project. When writing a multi-tenant service, always specify the tenant on a per-request basis.

Use the generated client in a multi-threaded context

Because the client follows the Builder pattern, the process of building and executing a request is thread-safe. A new, immutable Builder object is created each time a Builder method is called, so two threads can never make simultaneous changes to a shared Builder object. Therefore, you can safely store a partially prepared request Builder object and reuse it later in different threads, as sketched below.
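
A minimal sketch of this, sharing one client instance between two threads; the literal cat ID is made up for illustration:

Runnable task = () -> {
    // Each chained call below creates a fresh, immutable Builder,
    // so concurrent use of the shared client instance is safe.
    Response response = client
            .cats()
            .catId(42)
            .prepareGet()
            .execute();
    // process the response
};

new Thread(task).start();
new Thread(task).start();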

Multi-part support

The generated client code supports MultiPart content. However, it is an optional feature that you must activate separately. To activate the MultiPart feature, first add Jersey's Maven dependency as shown:

<dependency>
    <groupId>org.glassfish.jersey.media</groupId>
    <artifactId>jersey-media-multipart</artifactId>
    <version>${jersey.version}</version>
</dependency>

The following code snippets show how to do MultiPart requests using a client generated for the Media service. When instantiating the generated client, pass a Jersey client with the MultiPartFeature registered:

Client jerseyClient = ClientBuilder.newClient().register(MultiPartFeature.class);
MediaRepositoryClient generatedClient = new MediaRepositoryClient(MediaRepositoryClient.DEFAULT_BASE_URI, jerseyClient);

Now you can create a Jersey form:

FormDataMultiPart multipart = new FormDataMultiPart();
StreamDataBodyPart filePart = new StreamDataBodyPart("file", myFileInputStream);
multipart.bodyPart(filePart);

You can use the Jersey form you create as the payload of your request:

generatedClient
    .tenant("bananas11")
    .client("bananas11.icke")
    .media()
    .preparePost()
    .withPayload(Entity.entity(multipart, multipart.getMediaType()))
    .execute();

