Posted to commits@pekko.apache.org by md...@apache.org on 2022/11/18 07:30:32 UTC

[incubator-pekko-http] branch main updated (e7326a7e1 -> 664f044d7)

This is an automated email from the ASF dual-hosted git repository.

mdedetrich pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-pekko-http.git


    from e7326a7e1 copy over settings from incubator-pekko (#7)
     new 256267169 Replace scalariform with scalafmt
     new 2db062119 Add .gitattributes to enforce unix line endings
     new 90f8bf7e6 format source with scalafmt, #8
     new 664f044d7 Fix paradox issues caused by scalafmt

The 4 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
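
For context on the `.gitattributes` commit (2db062119): enforcing unix line endings is typically done by telling git to normalize text files to LF on checkin and checkout. A hypothetical minimal sketch (the actual 5-line file added in this push may differ):

```
# Normalize all text files to LF line endings (hypothetical sketch)
* text=auto eol=lf

# Windows batch scripts are a common exception that must keep CRLF
*.bat text eol=crlf
```

With this in place, contributors on Windows with `core.autocrlf` misconfigured can no longer introduce CRLF endings into the repository.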
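For context on the scalafmt commits (256267169, 90f8bf7e6): scalafmt is driven by a `.scalafmt.conf` file in HOCON syntax at the repository root. A hypothetical minimal sketch (the 77-line file actually added here will contain many more project-specific settings; the version and dialect below are assumptions):

```
// Minimal .scalafmt.conf sketch (hypothetical values)
version = "3.6.1"
runner.dialect = scala213
maxColumn = 120
```

Running `scalafmt` (or the `scalafmtAll` sbt task) against such a config is what produced the large mechanical reformatting visible in the diffstat below.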


Summary of changes:
 .gitattributes                                     |    5 +
 .github/workflows/validate-and-test.yml            |    4 +-
 .scalafmt.conf                                     |   77 +
 CONTRIBUTING.md                                    |    2 +-
 .../src/main/scala/akka/BenchRunner.scala          |    3 +-
 .../src/main/scala/akka/http/CommonBenchmark.scala |   24 +-
 .../http/impl/engine/ConnectionPoolBenchmark.scala |    9 +-
 .../http/impl/engine/HeaderParserBenchmark.scala   |    3 +-
 .../http/impl/engine/HttpEntityBenchmark.scala     |    3 +-
 .../impl/engine/ServerProcessingBenchmark.scala    |    6 +-
 .../engine/StreamServerProcessingBenchmark.scala   |   21 +-
 .../engine/http2/H2ClientServerBenchmark.scala     |   11 +-
 .../engine/http2/H2RequestResponseBenchmark.scala  |   18 +-
 .../engine/http2/H2ServerProcessingBenchmark.scala |    8 +-
 .../akka/http/impl/engine/ws/MaskingBench.scala    |    2 +-
 .../impl/model/parser/UriParserBenchmark.scala     |    3 +-
 .../unmarshalling/sse/LineParserBenchmark.scala    |    3 +-
 .../main/scala/akka/http/caching/LfuCache.scala    |   19 +-
 .../impl/settings/CachingSettingsImpl.scala        |    5 +-
 .../impl/settings/LfuCachingSettingsImpl.scala     |   16 +-
 .../scala/akka/http/caching/scaladsl/Cache.scala   |   11 +-
 .../http/caching/scaladsl/LfuCacheSettings.scala   |    3 +-
 .../server/directives/CachingDirectives.scala      |   27 +-
 .../server/directives/CachingDirectives.scala      |    8 +-
 .../akka/http/caching/ExpiringLfuCacheSpec.scala   |   49 +-
 .../server/directives/CachingDirectivesSpec.scala  |    4 +-
 .../scaladsl/HostConnectionPoolCompatSpec.scala    |    2 +-
 .../scala-2.13+/akka/http/ccompat/package.scala    |    4 +-
 .../akka/http/scaladsl/util/FastFuture.scala       |   42 +-
 .../scala-2.13-/akka/http/ccompat/package.scala    |   10 +-
 .../akka/http/scaladsl/util/FastFuture.scala       |   48 +-
 .../main/scala/akka/http/ParsingErrorHandler.scala |    5 +-
 .../engine/HttpConnectionIdleTimeoutBidi.scala     |   20 +-
 .../impl/engine/client/HttpsProxyGraphStage.scala  |  214 +-
 .../client/OutgoingConnectionBlueprint.scala       |  269 +--
 .../http/impl/engine/client/PoolInterface.scala    |   28 +-
 .../http/impl/engine/client/PoolMasterActor.scala  |   47 +-
 .../engine/client/pool/NewHostConnectionPool.scala |   71 +-
 .../http/impl/engine/client/pool/SlotState.scala   |   70 +-
 .../akka/http/impl/engine/http2/ByteFlag.scala     |    1 +
 .../akka/http/impl/engine/http2/FrameEvent.scala   |   68 +-
 .../akka/http/impl/engine/http2/FrameLogger.scala  |    9 +-
 .../scala/akka/http/impl/engine/http2/Http2.scala  |  113 +-
 .../http/impl/engine/http2/Http2AlpnSupport.scala  |   36 +-
 .../http/impl/engine/http2/Http2Blueprint.scala    |  100 +-
 .../http/impl/engine/http2/Http2Compliance.scala   |   30 +-
 .../akka/http/impl/engine/http2/Http2Demux.scala   |  235 ++-
 .../http/impl/engine/http2/Http2Multiplexer.scala  |    8 +-
 .../http/impl/engine/http2/Http2Protocol.scala     |   12 +-
 .../impl/engine/http2/Http2StreamHandling.scala    |  160 +-
 .../impl/engine/http2/HttpMessageRendering.scala   |   64 +-
 .../impl/engine/http2/IncomingFlowController.scala |    8 +-
 .../http2/OutgoingConnectionBuilderImpl.scala      |   72 +-
 .../akka/http/impl/engine/http2/PriorityTree.scala |   24 +-
 .../http/impl/engine/http2/ProtocolSwitch.scala    |   67 +-
 .../http/impl/engine/http2/RequestParsing.scala    |   82 +-
 .../http/impl/engine/http2/StreamPrioritizer.scala |   11 +-
 .../akka/http/impl/engine/http2/TelemetrySpi.scala |   12 +-
 .../engine/http2/client/PersistentConnection.scala |  323 +--
 .../impl/engine/http2/client/ResponseParsing.scala |   38 +-
 .../impl/engine/http2/framing/FrameRenderer.scala  |   40 +-
 .../engine/http2/framing/Http2FrameParsing.scala   |   27 +-
 .../engine/http2/hpack/HeaderCompression.scala     |  105 +-
 .../engine/http2/hpack/HeaderDecompression.scala   |  159 +-
 .../engine/http2/hpack/Http2HeaderParsing.scala    |    4 +-
 .../impl/engine/http2/util/AsciiTreeLayout.scala   |   29 +-
 .../http/impl/engine/parsing/BodyPartParser.scala  |   73 +-
 .../akka/http/impl/engine/parsing/BoyerMoore.scala |    4 +-
 .../impl/engine/parsing/HttpHeaderParser.scala     |  184 +-
 .../impl/engine/parsing/HttpMessageParser.scala    |  142 +-
 .../impl/engine/parsing/HttpRequestParser.scala    |  366 ++--
 .../impl/engine/parsing/HttpResponseParser.scala   |   66 +-
 .../http/impl/engine/parsing/ParserOutput.scala    |   33 +-
 .../parsing/SpecializedHeaderValueParsers.scala    |    3 +-
 .../akka/http/impl/engine/parsing/package.scala    |   13 +-
 .../impl/engine/rendering/BodyPartRenderer.scala   |   20 +-
 .../engine/rendering/DateHeaderRendering.scala     |   18 +-
 .../rendering/HttpRequestRendererFactory.scala     |  131 +-
 .../rendering/HttpResponseRendererFactory.scala    |  137 +-
 .../http/impl/engine/rendering/RenderSupport.scala |   14 +-
 .../impl/engine/server/HttpServerBluePrint.scala   |  894 +++++----
 .../http/impl/engine/server/ServerTerminator.scala |  191 +-
 .../UpgradeToOtherProtocolResponseHeader.scala     |    2 +-
 .../akka/http/impl/engine/ws/FrameEvent.scala      |   36 +-
 .../http/impl/engine/ws/FrameEventParser.scala     |   10 +-
 .../http/impl/engine/ws/FrameEventRenderer.scala   |   15 +-
 .../akka/http/impl/engine/ws/FrameHandler.scala    |   25 +-
 .../akka/http/impl/engine/ws/FrameLogger.scala     |   11 +-
 .../akka/http/impl/engine/ws/FrameOutHandler.scala |   32 +-
 .../scala/akka/http/impl/engine/ws/Handshake.scala |   69 +-
 .../scala/akka/http/impl/engine/ws/Masking.scala   |   78 +-
 .../impl/engine/ws/MessageToFrameRenderer.scala    |    2 +-
 .../scala/akka/http/impl/engine/ws/Protocol.scala  |   20 +-
 .../scala/akka/http/impl/engine/ws/Randoms.scala   |    1 +
 .../engine/ws/UpgradeToWebSocketLowLevel.scala     |    7 +-
 .../akka/http/impl/engine/ws/Utf8Decoder.scala     |    8 +-
 .../akka/http/impl/engine/ws/Utf8Encoder.scala     |  112 +-
 .../scala/akka/http/impl/engine/ws/WebSocket.scala |  134 +-
 .../impl/engine/ws/WebSocketClientBlueprint.scala  |   70 +-
 .../scala/akka/http/impl/model/JavaQuery.scala     |    7 +-
 .../akka/http/impl/model/UriJavaAccessor.scala     |    1 +
 .../impl/model/parser/AcceptCharsetHeader.scala    |    2 +-
 .../impl/model/parser/AcceptEncodingHeader.scala   |    4 +-
 .../akka/http/impl/model/parser/AcceptHeader.scala |    5 +-
 .../impl/model/parser/AcceptLanguageHeader.scala   |    2 +-
 .../http/impl/model/parser/Base64Parsing.scala     |    1 +
 .../impl/model/parser/CacheControlHeader.scala     |   26 +-
 .../http/impl/model/parser/CharacterClasses.scala  |    7 +-
 .../http/impl/model/parser/CommonActions.scala     |   30 +-
 .../akka/http/impl/model/parser/CommonRules.scala  |   68 +-
 .../model/parser/ContentDispositionHeader.scala    |   16 +-
 .../http/impl/model/parser/ContentTypeHeader.scala |   10 +-
 .../akka/http/impl/model/parser/HeaderParser.scala |   56 +-
 .../http/impl/model/parser/IpAddressParsing.scala  |   16 +-
 .../akka/http/impl/model/parser/LinkHeader.scala   |   52 +-
 .../http/impl/model/parser/SimpleHeaders.scala     |   23 +-
 .../akka/http/impl/model/parser/UriParser.scala    |   70 +-
 .../http/impl/model/parser/WebSocketHeaders.scala  |    6 +-
 .../settings/ClientConnectionSettingsImpl.scala    |   34 +-
 .../impl/settings/ConnectionPoolSettingsImpl.scala |   73 +-
 .../http/impl/settings/ConnectionPoolSetup.scala   |    6 +-
 .../impl/settings/HostConnectionPoolSetup.scala    |    1 -
 .../impl/settings/HttpsProxySettingsImpl.scala     |    8 +-
 .../http/impl/settings/ParserSettingsImpl.scala    |   77 +-
 .../impl/settings/PreviewServerSettingsImpl.scala  |    6 +-
 .../http/impl/settings/ServerSettingsImpl.scala    |   73 +-
 .../http/impl/settings/WebSocketSettingsImpl.scala |   16 +-
 .../http/impl/util/ByteStringParserInput.scala     |    3 +-
 .../scala/akka/http/impl/util/EnhancedConfig.scala |    2 +-
 .../scala/akka/http/impl/util/EnhancedString.scala |    2 +-
 .../scala/akka/http/impl/util/JavaAccessors.scala  |    1 +
 .../scala/akka/http/impl/util/JavaMapping.scala    |  102 +-
 .../akka/http/impl/util/LogByteStringTools.scala   |   48 +-
 .../akka/http/impl/util/One2OneBidiFlow.scala      |   90 +-
 .../main/scala/akka/http/impl/util/Rendering.scala |   27 +-
 .../http/impl/util/SettingsCompanionImpl.scala     |   16 +-
 .../akka/http/impl/util/SocketOptionSettings.scala |   14 +-
 .../http/impl/util/StageLoggingWithOverride.scala  |    1 +
 .../scala/akka/http/impl/util/StreamUtils.scala    |  240 +--
 .../main/scala/akka/http/impl/util/package.scala   |   70 +-
 .../scala/akka/http/javadsl/ClientTransport.scala  |   10 +-
 .../main/scala/akka/http/javadsl/ConnectHttp.scala |    5 +-
 .../akka/http/javadsl/ConnectionContext.scala      |   39 +-
 .../src/main/scala/akka/http/javadsl/Http.scala    |  267 +--
 .../akka/http/javadsl/IncomingConnection.scala     |   13 +-
 .../akka/http/javadsl/OutgoingConnection.scala     |    1 +
 .../http/javadsl/OutgoingConnectionBuilder.scala   |    1 +
 .../scala/akka/http/javadsl/ServerBinding.scala    |    4 +-
 .../scala/akka/http/javadsl/ServerBuilder.scala    |   16 +-
 .../akka/http/javadsl/model/ContentType.scala      |    2 +
 .../scala/akka/http/javadsl/model/MediaType.scala  |   11 +-
 .../javadsl/model/RequestResponseAssociation.scala |    3 +-
 .../scala/akka/http/javadsl/model/Trailer.scala    |    1 +
 .../scala/akka/http/javadsl/model/ws/Message.scala |   16 +-
 .../http/javadsl/model/ws/UpgradeToWebSocket.scala |    7 +-
 .../akka/http/javadsl/model/ws/WebSocket.scala     |    1 +
 .../http/javadsl/model/ws/WebSocketRequest.scala   |    3 +-
 .../http/javadsl/model/ws/WebSocketUpgrade.scala   |    7 +-
 .../model/ws/WebSocketUpgradeResponse.scala        |    3 +-
 .../settings/ClientConnectionSettings.scala        |   25 +-
 .../javadsl/settings/ConnectionPoolSettings.scala  |   14 +-
 .../javadsl/settings/Http2ClientSettings.scala     |   15 +-
 .../http/javadsl/settings/ParserSettings.scala     |   26 +-
 .../javadsl/settings/PreviewServerSettings.scala   |    1 +
 .../http/javadsl/settings/ServerSettings.scala     |   31 +-
 .../http/javadsl/settings/WebSocketSettings.scala  |    1 +
 .../scala/akka/http/scaladsl/ClientTransport.scala |   36 +-
 .../akka/http/scaladsl/ConnectionContext.scala     |   91 +-
 .../src/main/scala/akka/http/scaladsl/Http.scala   |  331 +--
 .../http/scaladsl/OutgoingConnectionBuilder.scala  |    1 +
 .../scala/akka/http/scaladsl/ServerBuilder.scala   |   19 +-
 .../akka/http/scaladsl/model/AttributeKeys.scala   |    1 -
 .../akka/http/scaladsl/model/ContentRange.scala    |    7 +-
 .../akka/http/scaladsl/model/ContentType.scala     |   18 +-
 .../scala/akka/http/scaladsl/model/DateTime.scala  |   22 +-
 .../scala/akka/http/scaladsl/model/ErrorInfo.scala |   35 +-
 .../scala/akka/http/scaladsl/model/FormData.scala  |    7 +-
 .../akka/http/scaladsl/model/HttpCharset.scala     |    5 +-
 .../akka/http/scaladsl/model/HttpEntity.scala      |  154 +-
 .../akka/http/scaladsl/model/HttpHeader.scala      |    8 +-
 .../akka/http/scaladsl/model/HttpMessage.scala     |  215 +-
 .../akka/http/scaladsl/model/HttpMethod.scala      |   14 +-
 .../akka/http/scaladsl/model/HttpProtocol.scala    |    5 +-
 .../akka/http/scaladsl/model/MediaRange.scala      |   16 +-
 .../scala/akka/http/scaladsl/model/MediaType.scala |   56 +-
 .../scala/akka/http/scaladsl/model/Multipart.scala |  139 +-
 .../akka/http/scaladsl/model/RemoteAddress.scala   |    5 +-
 .../model/RequestResponseAssociation.scala         |    1 -
 .../scala/akka/http/scaladsl/model/Trailer.scala   |    1 +
 .../http/scaladsl/model/TransferEncoding.scala     |    4 +-
 .../main/scala/akka/http/scaladsl/model/Uri.scala  |  110 +-
 .../akka/http/scaladsl/model/WithQValue.scala      |    1 +
 .../http/scaladsl/model/headers/ByteRange.scala    |    8 +
 .../scaladsl/model/headers/CacheDirective.scala    |    5 +-
 .../model/headers/ContentDispositionType.scala     |    2 +-
 .../scaladsl/model/headers/HttpChallenge.scala     |    2 +-
 .../http/scaladsl/model/headers/HttpCookie.scala   |  178 +-
 .../scaladsl/model/headers/HttpCredentials.scala   |    9 +-
 .../http/scaladsl/model/headers/HttpEncoding.scala |    6 +-
 .../scaladsl/model/headers/LanguageRange.scala     |    8 +-
 .../http/scaladsl/model/headers/LinkValue.scala    |   13 +-
 .../scaladsl/model/headers/ProductVersion.scala    |    9 +-
 .../model/headers/WebSocketExtension.scala         |    3 +-
 .../akka/http/scaladsl/model/headers/headers.scala |  124 +-
 .../http/scaladsl/model/http2/Http2Exception.scala |    4 +-
 .../scala/akka/http/scaladsl/model/package.scala   |    1 +
 .../http/scaladsl/model/sse/ServerSentEvent.scala  |   14 +-
 .../akka/http/scaladsl/model/ws/Message.scala      |   29 +-
 .../model/ws/PeerClosedConnectionException.scala   |    3 +-
 .../scaladsl/model/ws/UpgradeToWebSocket.scala     |   29 +-
 .../http/scaladsl/model/ws/WebSocketRequest.scala  |   15 +-
 .../http/scaladsl/model/ws/WebSocketUpgrade.scala  |   29 +-
 .../model/ws/WebSocketUpgradeResponse.scala        |    3 +-
 .../settings/ClientConnectionSettings.scala        |   34 +-
 .../scaladsl/settings/ConnectionPoolSettings.scala |   50 +-
 .../scaladsl/settings/Http2ServerSettings.scala    |   86 +-
 .../scaladsl/settings/HttpsProxySettings.scala     |    4 +-
 .../http/scaladsl/settings/ParserSettings.scala    |   69 +-
 .../http/scaladsl/settings/ServerSettings.scala    |   56 +-
 .../http/scaladsl/settings/WebSocketSettings.scala |    1 +
 .../test/scala/akka/http/HashCodeCollider.scala    |    3 +-
 .../engine/client/ClientCancellationSpec.scala     |   20 +-
 .../client/HighLevelOutgoingConnectionSpec.scala   |   14 +-
 .../engine/client/HostConnectionPoolSpec.scala     |  683 ++++---
 .../impl/engine/client/HttpConfigurationSpec.scala |   52 +-
 .../engine/client/HttpsProxyGraphStageSpec.scala   |    3 +-
 .../client/LowLevelOutgoingConnectionSpec.scala    |  116 +-
 .../impl/engine/client/NewConnectionPoolSpec.scala |  142 +-
 .../impl/engine/client/PrepareResponseSpec.scala   |    7 +-
 .../engine/client/ResponseParsingMergeSpec.scala   |   26 +-
 .../client/TlsEndpointVerificationSpec.scala       |    4 +-
 .../impl/engine/client/pool/SlotStateSpec.scala    |    9 +-
 .../http/impl/engine/parsing/BoyerMooreSpec.scala  |    4 +-
 .../impl/engine/parsing/HttpHeaderParserSpec.scala |  110 +-
 .../engine/parsing/HttpHeaderParserTestBed.scala   |    5 +-
 .../impl/engine/parsing/RequestParserSpec.scala    |  102 +-
 .../impl/engine/parsing/ResponseParserSpec.scala   |   80 +-
 .../engine/rendering/RequestRendererSpec.scala     |   68 +-
 .../engine/rendering/ResponseRendererSpec.scala    |  109 +-
 .../engine/server/HttpServerBug21008Spec.scala     |   70 +-
 .../http/impl/engine/server/HttpServerSpec.scala   |  735 +++----
 .../engine/server/HttpServerTestSetupBase.scala    |   10 +-
 .../HttpServerWithExplicitSchedulerSpec.scala      |   12 +-
 .../impl/engine/server/PrepareRequestsSpec.scala   |   13 +-
 .../akka/http/impl/engine/ws/BitBuilder.scala      |   11 +-
 .../http/impl/engine/ws/ByteStringSinkProbe.scala  |   12 +-
 .../http/impl/engine/ws/EchoTestClientApp.scala    |    4 +-
 .../akka/http/impl/engine/ws/FramingSpec.scala     |    6 +-
 .../akka/http/impl/engine/ws/MessageSpec.scala     |   76 +-
 .../akka/http/impl/engine/ws/Utf8CodingSpecs.scala |   12 +-
 .../http/impl/engine/ws/WSServerAutobahnTest.scala |    4 +-
 .../akka/http/impl/engine/ws/WSTestSetupBase.scala |   54 +-
 .../akka/http/impl/engine/ws/WSTestUtils.scala     |   34 +-
 .../http/impl/engine/ws/WebSocketClientSpec.scala  |   23 +-
 .../impl/engine/ws/WebSocketIntegrationSpec.scala  |   35 +-
 .../http/impl/engine/ws/WebSocketServerSpec.scala  |    3 +-
 .../http/impl/model/parser/HttpHeaderSpec.scala    |  186 +-
 .../http/impl/util/AkkaSpecWithMaterializer.scala  |   17 +-
 .../scala/akka/http/impl/util/BenchUtils.scala     |    1 +
 .../akka/http/impl/util/CollectionStage.scala      |    3 +-
 .../akka/http/impl/util/ExampleHttpContexts.scala  |    5 +-
 .../akka/http/impl/util/One2OneBidiFlowSpec.scala  |   22 +-
 .../scala/akka/http/impl/util/RenderingSpec.scala  |    5 +-
 .../akka/http/impl/util/StreamUtilsSpec.scala      |    4 +-
 .../akka/http/impl/util/WithLogCapturing.scala     |    4 +-
 .../akka/http/javadsl/ConnectionContextSpec.scala  |    3 +-
 .../akka/http/javadsl/HttpExtensionApiSpec.scala   |   20 +-
 .../akka/http/javadsl/model/JavaApiSpec.scala      |    8 +-
 .../http/javadsl/model/JavaApiTestCaseSpecs.scala  |    6 +-
 .../akka/http/javadsl/model/MultipartsSpec.scala   |    3 +-
 .../javadsl/model/headers/HttpCookieSpec.scala     |    3 +-
 .../akka/http/scaladsl/ClientServerSpec.scala      |  131 +-
 .../ClientTransportWithCustomResolverSpec.scala    |    6 +-
 .../http/scaladsl/GracefulTerminationSpec.scala    |   38 +-
 .../test/scala/akka/http/scaladsl/TestClient.scala |   12 +-
 .../http/scaladsl/TightRequestTimeoutSpec.scala    |    5 +-
 .../akka/http/scaladsl/model/DateTimeSpec.scala    |    2 +-
 .../http/scaladsl/model/EntityDiscardingSpec.scala |    3 +-
 .../akka/http/scaladsl/model/HttpEntitySpec.scala  |   85 +-
 .../akka/http/scaladsl/model/HttpMessageSpec.scala |   21 +-
 .../akka/http/scaladsl/model/MultipartSpec.scala   |   17 +-
 .../akka/http/scaladsl/model/TurkishISpec.scala    |    2 +-
 .../scala/akka/http/scaladsl/model/UriSpec.scala   |  223 ++-
 .../http/scaladsl/model/headers/HeaderSpec.scala   |   63 +-
 .../settings/ConnectionPoolSettingsSpec.scala      |   12 +-
 .../src/test/scala/akka/stream/testkit/Utils.scala |    6 +-
 .../src/test/scala/akka/testkit/AkkaSpec.scala     |   12 +-
 .../src/test/scala/akka/testkit/Coroner.scala      |   20 +-
 .../http/HttpModelIntegrationSpec.scala            |    7 +-
 .../sprayjson/SprayJsonByteStringParserInput.scala |    1 -
 .../marshallers/sprayjson/SprayJsonSupport.scala   |   46 +-
 .../sprayjson/SprayJsonSupportSpec.scala           |    3 +-
 .../scaladsl/marshallers/xml/ScalaXmlSupport.scala |    7 +-
 .../akka/http/fix/MigrateToServerBuilder.scala     |   33 +-
 .../akka/http/fix/MigrateToServerBuilderTest.scala |    2 +-
 .../akka/http/javadsl/testkit/JUnitRouteTest.scala |    9 +-
 .../akka/http/javadsl/testkit/RouteTest.scala      |    9 +-
 .../http/javadsl/testkit/TestRouteResult.scala     |   11 +-
 .../scala/akka/http/javadsl/testkit/WSProbe.scala  |    3 +-
 .../javadsl/testkit/WSTestRequestBuilding.scala    |   10 +-
 .../scaladsl/testkit/MarshallingTestUtils.scala    |   13 +-
 .../akka/http/scaladsl/testkit/RouteTest.scala     |   31 +-
 .../testkit/RouteTestResultComponent.scala         |    3 +-
 .../http/scaladsl/testkit/ScalatestUtils.scala     |   14 +-
 .../akka/http/scaladsl/testkit/Specs2Utils.scala   |    6 +-
 .../scala/akka/http/scaladsl/testkit/WSProbe.scala |   17 +-
 .../scaladsl/testkit/WSTestRequestBuilding.scala   |   31 +-
 .../scaladsl/testkit/ScalatestRouteTestSpec.scala  |   20 +-
 .../http/AkkaHttpServerLatencyMultiNodeSpec.scala  |   58 +-
 .../scala/akka/http/STMultiNodeSpec.scala          |    4 +-
 .../akka/remote/testkit/MultiNodeConfig.scala      |   87 +-
 .../akka/remote/testkit/PerfFlamesSupport.scala    |    2 +-
 .../scala/org/scalatest/extra/QuietReporter.scala  |    3 +-
 .../test/scala/akka/http/HashCodeCollider.scala    |    3 +-
 .../http/javadsl/DirectivesConsistencySpec.scala   |   27 +-
 .../akka/http/javadsl/server/HttpAppSpec.scala     |   35 +-
 .../akka/http/scaladsl/CustomMediaTypesSpec.scala  |   13 +-
 .../akka/http/scaladsl/CustomStatusCodesSpec.scala |    7 +-
 .../scala/akka/http/scaladsl/FormDataSpec.scala    |   12 +-
 .../scaladsl/RouteJavaScalaDslConversionSpec.scala |    8 +-
 .../akka/http/scaladsl/TestSingleRequest.scala     |    2 +-
 .../http/scaladsl/coding/CodecSpecSupport.scala    |    6 +-
 .../akka/http/scaladsl/coding/CoderSpec.scala      |    7 +-
 .../akka/http/scaladsl/coding/DecoderSpec.scala    |   27 +-
 .../akka/http/scaladsl/coding/DeflateSpec.scala    |   11 +-
 .../akka/http/scaladsl/coding/EncoderSpec.scala    |    3 +-
 .../scala/akka/http/scaladsl/coding/GzipSpec.scala |    2 +-
 .../akka/http/scaladsl/coding/NoCodingSpec.scala   |    2 +-
 .../scaladsl/marshallers/JsonSupportSpec.scala     |    2 +-
 .../sprayjson/SprayJsonSupportSpec.scala           |    4 +-
 .../marshallers/xml/ScalaXmlSupportSpec.scala      |   14 +-
 .../marshalling/ContentNegotiationSpec.scala       |  191 +-
 .../FromStatusCodeAndXYZMarshallerSpec.scala       |    9 +-
 .../scaladsl/marshalling/MarshallingSpec.scala     |   76 +-
 .../http/scaladsl/server/BasicRouteSpecs.scala     |   39 +-
 .../http/scaladsl/server/ConnectionTestApp.scala   |   10 +-
 .../DiscardEntityDefaultExceptionHandlerSpec.scala |   15 +-
 .../DontLeakActorsOnFailingConnectionSpecs.scala   |    2 +-
 .../http/scaladsl/server/EntityStreamingSpec.scala |   92 +-
 .../akka/http/scaladsl/server/HttpAppSpec.scala    |   11 +-
 .../scaladsl/server/ModeledCustomHeaderSpec.scala  |   12 +-
 .../akka/http/scaladsl/server/RejectionSpec.scala  |   26 +-
 .../akka/http/scaladsl/server/RoutingSpec.scala    |    3 +-
 .../akka/http/scaladsl/server/SizeLimitSpec.scala  |   29 +-
 .../akka/http/scaladsl/server/TcpLeakApp.scala     |    9 +-
 .../directives/AttributeDirectivesSpec.scala       |    1 -
 .../directives/CacheConditionDirectivesSpec.scala  |   36 +-
 .../server/directives/CodingDirectivesSpec.scala   |   38 +-
 .../directives/DebuggingDirectivesSpec.scala       |    9 +-
 .../directives/ExecutionDirectivesSpec.scala       |  148 +-
 .../directives/FileAndResourceDirectivesSpec.scala |   21 +-
 .../directives/FileUploadDirectivesSpec.scala      |   88 +-
 .../directives/FormFieldDirectivesSpec.scala       |   33 +-
 .../server/directives/FutureDirectivesSpec.scala   |   48 +-
 .../server/directives/HeaderDirectivesSpec.scala   |    5 +-
 .../directives/MarshallingDirectivesSpec.scala     |   22 +-
 .../server/directives/MethodDirectivesSpec.scala   |   15 +-
 .../server/directives/MiscDirectivesSpec.scala     |   24 +-
 .../directives/ParameterDirectivesSpec.scala       |   24 +-
 .../server/directives/PathDirectivesSpec.scala     |  473 ++---
 .../server/directives/RangeDirectivesSpec.scala    |    5 +-
 .../server/directives/RouteDirectivesSpec.scala    |   41 +-
 .../server/directives/SchemeDirectivesSpec.scala   |   14 +-
 .../server/directives/SecurityDirectivesSpec.scala |   13 +-
 .../server/directives/TimeoutDirectivesSpec.scala  |   12 +-
 .../directives/WebSocketDirectivesSpec.scala       |   57 +-
 .../http/scaladsl/server/util/TupleOpsSpec.scala   |    6 +-
 .../unmarshalling/MultipartUnmarshallersSpec.scala |  173 +-
 .../scaladsl/unmarshalling/UnmarshallingSpec.scala |   18 +-
 .../sse/EventStreamUnmarshallingSpec.scala         |    3 +-
 .../sse/ServerSentEventParserSpec.scala            |    8 +-
 .../http/impl/settings/RoutingSettingsImpl.scala   |   21 +-
 .../settings/ServerSentEventSettingsImpl.scala     |   10 +-
 .../javadsl/common/EntityStreamingSupport.scala    |    6 +
 .../akka/http/javadsl/marshalling/Marshaller.scala |   41 +-
 .../akka/http/javadsl/server/Directives.scala      |   15 +-
 .../http/javadsl/server/ExceptionHandler.scala     |    2 +
 .../akka/http/javadsl/server/PathMatchers.scala    |   26 +-
 .../http/javadsl/server/RejectionHandler.scala     |   18 +-
 .../akka/http/javadsl/server/Rejections.scala      |   26 +-
 .../akka/http/javadsl/server/RequestContext.scala  |   19 +-
 .../scala/akka/http/javadsl/server/Route.scala     |    7 +-
 .../http/javadsl/server/RoutingJavaMapping.scala   |   36 +-
 .../server/directives/AttributeDirectives.scala    |    2 +-
 .../server/directives/BasicDirectives.scala        |   54 +-
 .../directives/CacheConditionDirectives.scala      |    7 +-
 .../server/directives/CodingDirectives.scala       |    1 +
 .../server/directives/CookieDirectives.scala       |    1 +
 .../server/directives/DebuggingDirectives.scala    |   29 +-
 .../directives/FileAndResourceDirectives.scala     |   33 +-
 .../server/directives/FileUploadDirectives.scala   |   27 +-
 .../server/directives/FormFieldDirectives.scala    |    2 +-
 .../FramedEntityStreamingDirectives.scala          |   11 +-
 .../server/directives/FutureDirectives.scala       |    6 +-
 .../server/directives/HeaderDirectives.scala       |   31 +-
 .../javadsl/server/directives/HostDirectives.scala |    1 +
 .../server/directives/MarshallingDirectives.scala  |   19 +-
 .../server/directives/ParameterDirectives.scala    |   23 +-
 .../javadsl/server/directives/PathDirectives.scala |    3 +-
 .../server/directives/RangeDirectives.scala        |    1 +
 .../javadsl/server/directives/RouteAdapter.scala   |    2 +-
 .../server/directives/RouteDirectives.scala        |   20 +-
 .../server/directives/SchemeDirectives.scala       |    1 +
 .../server/directives/SecurityDirectives.scala     |   65 +-
 .../server/directives/TimeoutDirectives.scala      |   15 +-
 .../server/directives/WebSocketDirectives.scala    |   10 +-
 .../http/javadsl/settings/RoutingSettings.scala    |   23 +-
 .../javadsl/settings/ServerSentEventSettings.scala |    1 -
 .../javadsl/unmarshalling/StringUnmarshaller.scala |   10 +-
 .../http/javadsl/unmarshalling/Unmarshaller.scala  |   52 +-
 .../sse/EventStreamUnmarshalling.scala             |    7 +-
 .../http/scaladsl/client/RequestBuilding.scala     |   14 +-
 .../client/TransformerPipelineSupport.scala        |    4 +-
 .../scala/akka/http/scaladsl/coding/Coders.scala   |   10 +-
 .../akka/http/scaladsl/coding/DataMapper.scala     |    5 +-
 .../scala/akka/http/scaladsl/coding/Decoder.scala  |    4 +-
 .../scala/akka/http/scaladsl/coding/Deflate.scala  |    3 +-
 .../http/scaladsl/coding/DeflateCompressor.scala   |   12 +-
 .../scala/akka/http/scaladsl/coding/Encoder.scala  |    9 +-
 .../scala/akka/http/scaladsl/coding/Gzip.scala     |    3 +-
 .../akka/http/scaladsl/coding/GzipCompressor.scala |    6 +-
 .../common/CsvEntityStreamingSupport.scala         |   16 +-
 .../scaladsl/common/EntityStreamingSupport.scala   |    5 +-
 .../common/JsonEntityStreamingSupport.scala        |   16 +-
 .../akka/http/scaladsl/common/NameReceptacle.scala |   13 +-
 .../akka/http/scaladsl/common/StrictForm.scala     |  105 +-
 .../scaladsl/marshalling/GenericMarshallers.scala  |    5 +-
 .../akka/http/scaladsl/marshalling/Marshal.scala   |   19 +-
 .../http/scaladsl/marshalling/Marshaller.scala     |   49 +-
 .../marshalling/MultipartMarshallers.scala         |    7 +-
 .../PredefinedToEntityMarshallers.scala            |    9 +-
 .../PredefinedToRequestMarshallers.scala           |    9 +-
 .../PredefinedToResponseMarshallers.scala          |   93 +-
 .../marshalling/ToResponseMarshallable.scala       |    2 +-
 .../akka/http/scaladsl/marshalling/package.scala   |    4 +-
 .../http/scaladsl/server/ContentNegotation.scala   |   10 +-
 .../akka/http/scaladsl/server/Directive.scala      |   48 +-
 .../akka/http/scaladsl/server/Directives.scala     |   52 +-
 .../http/scaladsl/server/ExceptionHandler.scala    |   40 +-
 .../scala/akka/http/scaladsl/server/HttpApp.scala  |   11 +-
 .../akka/http/scaladsl/server/PathMatcher.scala    |   47 +-
 .../akka/http/scaladsl/server/Rejection.scala      |   82 +-
 .../http/scaladsl/server/RejectionHandler.scala    |   57 +-
 .../akka/http/scaladsl/server/RequestContext.scala |    9 +-
 .../http/scaladsl/server/RequestContextImpl.scala  |   42 +-
 .../scala/akka/http/scaladsl/server/Route.scala    |   54 +-
 .../http/scaladsl/server/RouteConcatenation.scala  |    1 +
 .../akka/http/scaladsl/server/RouteResult.scala    |   30 +-
 .../akka/http/scaladsl/server/RoutingLog.scala     |    2 +-
 .../server/directives/BasicDirectives.scala        |   34 +-
 .../directives/CacheConditionDirectives.scala      |    7 +-
 .../server/directives/CodingDirectives.scala       |    2 +-
 .../server/directives/DebuggingDirectives.scala    |   12 +-
 .../server/directives/ExecutionDirectives.scala    |   12 +-
 .../directives/FileAndResourceDirectives.scala     |   50 +-
 .../server/directives/FileUploadDirectives.scala   |    4 +-
 .../server/directives/FormFieldDirectives.scala    |  110 +-
 .../FramedEntityStreamingDirectives.scala          |    6 +-
 .../server/directives/HeaderDirectives.scala       |   35 +-
 .../server/directives/HostDirectives.scala         |    6 +-
 .../server/directives/MarshallingDirectives.scala  |   18 +-
 .../server/directives/MethodDirectives.scala       |    8 +-
 .../server/directives/MiscDirectives.scala         |   15 +-
 .../server/directives/ParameterDirectives.scala    |   60 +-
 .../server/directives/PathDirectives.scala         |    6 +-
 .../server/directives/RangeDirectives.scala        |   13 +-
 .../server/directives/RouteDirectives.scala        |   37 +-
 .../server/directives/SecurityDirectives.scala     |   74 +-
 .../server/directives/WebSocketDirectives.scala    |    5 +-
 .../http/scaladsl/server/util/BinaryPolyFunc.scala |    1 -
 .../akka/http/scaladsl/server/util/Tuple.scala     |   28 +-
 .../akka/http/scaladsl/server/util/TupleOps.scala  |    3 +-
 .../http/scaladsl/settings/RoutingSettings.scala   |   30 +-
 .../unmarshalling/GenericUnmarshallers.scala       |   23 +-
 .../unmarshalling/MultipartUnmarshallers.scala     |  132 +-
 .../PredefinedFromEntityUnmarshallers.scala        |   13 +-
 .../PredefinedFromStringUnmarshallers.scala        |    8 +-
 .../http/scaladsl/unmarshalling/Unmarshal.scala    |    1 +
 .../http/scaladsl/unmarshalling/Unmarshaller.scala |   53 +-
 .../akka/http/scaladsl/unmarshalling/package.scala |    4 +-
 .../sse/EventStreamUnmarshalling.scala             |   18 +-
 .../scaladsl/unmarshalling/sse/LineParser.scala    |   10 +-
 .../unmarshalling/sse/ServerSentEventParser.scala  |   16 +-
 .../impl/engine/http2/H2SpecIntegrationSpec.scala  |   22 +-
 .../http/impl/engine/http2/H2cUpgradeSpec.scala    |   18 +-
 .../impl/engine/http2/HPackEncodingSupport.scala   |   17 +-
 .../http/impl/engine/http2/HPackSpecExamples.scala |    9 +-
 .../impl/engine/http2/Http2ClientServerSpec.scala  |   48 +-
 .../http/impl/engine/http2/Http2ClientSpec.scala   |  834 ++++----
 .../impl/engine/http2/Http2FrameHpackSupport.scala |   32 +-
 .../http/impl/engine/http2/Http2FrameProbe.scala   |   36 +-
 .../http/impl/engine/http2/Http2FrameSending.scala |    9 +-
 .../engine/http2/Http2PersistentClientSpec.scala   |  202 +-
 .../impl/engine/http2/Http2ServerDemuxSpec.scala   |   11 +-
 .../http/impl/engine/http2/Http2ServerSpec.scala   | 2118 ++++++++++----------
 .../engine/http2/HttpMessageRenderingSpec.scala    |   41 +-
 .../http/impl/engine/http2/PriorityTreeSpec.scala  |   81 +-
 .../impl/engine/http2/ProtocolSwitchSpec.scala     |   21 +-
 .../impl/engine/http2/RequestParsingSpec.scala     |  174 +-
 .../http/impl/engine/http2/TelemetrySpiSpec.scala  |   84 +-
 .../http/impl/engine/http2/WindowTracking.scala    |    3 +-
 .../impl/engine/http2/WithPriorKnowledgeSpec.scala |    9 +-
 .../engine/http2/framing/Http2FramingSpec.scala    |   29 +-
 .../akka/http/impl/engine/http2/package.scala      |    7 +-
 .../scala/akka/http/scaladsl/Http2ServerTest.scala |   12 +-
 .../main/scala/akka/parboiled2/CharPredicate.scala |   26 +-
 .../src/main/scala/akka/parboiled2/CharUtils.scala |    5 +-
 .../akka/parboiled2/DynamicRuleDispatch.scala      |    6 +-
 .../scala/akka/parboiled2/ErrorFormatter.scala     |   26 +-
 .../main/scala/akka/parboiled2/ParseError.scala    |   10 +-
 .../src/main/scala/akka/parboiled2/Parser.scala    |   59 +-
 .../main/scala/akka/parboiled2/ParserInput.scala   |   10 +-
 .../src/main/scala/akka/parboiled2/Rule.scala      |   27 +-
 .../scala/akka/parboiled2/RuleDSLActions.scala     |    3 +-
 .../scala/akka/parboiled2/RuleDSLCombinators.scala |   13 +-
 .../akka/parboiled2/support/ActionOpsSupport.scala |   32 +-
 .../akka/parboiled2/support/OpTreeContext.scala    |  128 +-
 .../scala/akka/parboiled2/support/RunResult.scala  |   17 +-
 .../scala/akka/parboiled2/support/Unpack.scala     |    2 +-
 .../src/main/scala/akka/shapeless/ops/hlists.scala |   10 +-
 build.sbt                                          |  136 +-
 .../scala/docs/ApiMayChangeDocCheckerSpec.scala    |    4 +-
 docs/src/test/scala/docs/CompileOnlySpec.scala     |    1 +
 .../scala/docs/http/scaladsl/Http2ClientApp.scala  |   29 +-
 .../test/scala/docs/http/scaladsl/Http2Spec.scala  |   23 +-
 .../scaladsl/HttpClientDecodingExampleSpec.scala   |    4 +-
 .../docs/http/scaladsl/HttpClientExampleSpec.scala |   72 +-
 .../HttpRequestDetailedStringExampleSpec.scala     |    6 +-
 .../HttpResponseDetailedStringExampleSpec.scala    |    6 +-
 .../docs/http/scaladsl/HttpServerExampleSpec.scala |   69 +-
 .../docs/http/scaladsl/HttpServerHighLevel.scala   |    3 +-
 .../HttpServerStreamingRandomNumbers.scala         |    6 +-
 .../scaladsl/HttpServerWithActorInteraction.scala  |    3 +-
 .../http/scaladsl/HttpServerWithActorsSample.scala |   28 +-
 .../docs/http/scaladsl/HttpsExamplesSpec.scala     |    4 +-
 .../scala/docs/http/scaladsl/MarshalSpec.scala     |    4 +-
 .../test/scala/docs/http/scaladsl/ModelSpec.scala  |   33 +-
 .../docs/http/scaladsl/RouteSealExampleSpec.scala  |    7 +-
 .../scaladsl/ServerSentEventsExampleSpec.scala     |    8 +-
 .../docs/http/scaladsl/SprayJsonExample.scala      |    3 +-
 .../docs/http/scaladsl/SprayJsonExampleSpec.scala  |    7 +-
 .../http/scaladsl/SprayJsonPrettyMarshalSpec.scala |   12 +-
 .../scala/docs/http/scaladsl/UnmarshalSpec.scala   |    4 +-
 .../http/scaladsl/WebSocketClientExampleSpec.scala |   30 +-
 .../server/AkkaHttp1020MigrationSpec.scala         |    8 +-
 .../server/BlockingInHttpExamplesSpec.scala        |   10 +-
 .../server/CaseClassExtractionExamplesSpec.scala   |   22 +-
 .../scaladsl/server/DirectiveExamplesSpec.scala    |   48 +-
 .../scaladsl/server/FileUploadExamplesSpec.scala   |   13 +-
 .../server/FullSpecs2TestKitExampleSpec.scala      |    3 +-
 .../scaladsl/server/FullTestKitExampleSpec.scala   |    3 +-
 .../scaladsl/server/HttpsServerExampleSpec.scala   |   18 +-
 .../server/RejectionHandlerExamplesSpec.scala      |   31 +-
 .../server/ServerShutdownExampleSpec.scala         |    2 +-
 .../scaladsl/server/WebSocketExampleSpec.scala     |   30 +-
 .../AttributeDirectivesExamplesSpec.scala          |   20 +-
 .../directives/BasicDirectivesExamplesSpec.scala   |  187 +-
 .../directives/CachingDirectivesExamplesSpec.scala |   24 +-
 .../directives/CodingDirectivesExamplesSpec.scala  |   37 +-
 .../directives/CookieDirectivesExamplesSpec.scala  |   21 +-
 .../directives/CustomDirectivesExamplesSpec.scala  |   28 +-
 .../server/directives/CustomHttpMethodSpec.scala   |   11 +-
 .../DebuggingDirectivesExamplesSpec.scala          |   32 +-
 .../ExecutionDirectivesExamplesSpec.scala          |   10 +-
 .../FileAndResourceDirectivesExamplesSpec.scala    |   31 +-
 .../FileUploadDirectivesExamplesSpec.scala         |   17 +-
 .../FormFieldDirectivesExamplesSpec.scala          |   23 +-
 .../directives/FutureDirectivesExamplesSpec.scala  |   25 +-
 .../directives/HeaderDirectivesExamplesSpec.scala  |   76 +-
 .../directives/HostDirectivesExamplesSpec.scala    |   25 +-
 .../directives/JsonStreamingExamplesSpec.scala     |   81 +-
 .../directives/JsonStreamingFullExamples.scala     |   13 +-
 .../MarshallingDirectivesExamplesSpec.scala        |   49 +-
 .../directives/MethodDirectivesExamplesSpec.scala  |   46 +-
 .../directives/MiscDirectivesExamplesSpec.scala    |   45 +-
 .../ParameterDirectivesExamplesSpec.scala          |   50 +-
 .../directives/PathDirectivesExamplesSpec.scala    |  110 +-
 .../directives/RangeDirectivesExamplesSpec.scala   |    6 +-
 .../RespondWithDirectivesExamplesSpec.scala        |   16 +-
 .../directives/RouteDirectivesExamplesSpec.scala   |   41 +-
 .../directives/SchemeDirectivesExamplesSpec.scala  |   11 +-
 .../SecurityDirectivesExamplesSpec.scala           |  170 +-
 .../server/directives/StyleGuideExamplesSpec.scala |   42 +-
 .../directives/TimeoutDirectivesExamplesSpec.scala |   37 +-
 .../WebSocketDirectivesExamplesSpec.scala          |   70 +-
 project/AkkaDependency.scala                       |   25 +-
 project/AutomaticModuleName.scala                  |    7 +-
 project/Common.scala                               |    7 +-
 project/CopyrightHeader.scala                      |   12 +-
 project/Dependencies.scala                         |   64 +-
 project/Doc.scala                                  |   56 +-
 project/Formatting.scala                           |   36 -
 project/MiMa.scala                                 |    8 +-
 project/MultiNode.scala                            |   29 +-
 project/ParadoxSupport.scala                       |   47 +-
 project/Publish.scala                              |   10 +-
 project/SbtInternalAccess.scala                    |   10 +-
 project/ValidatePullRequest.scala                  |  192 +-
 project/VersionGenerator.scala                     |    3 +-
 project/plugins.sbt                                |    2 +-
 600 files changed, 13691 insertions(+), 11453 deletions(-)
 create mode 100644 .gitattributes
 create mode 100644 .scalafmt.conf
 delete mode 100644 project/Formatting.scala


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pekko.apache.org
For additional commands, e-mail: commits-help@pekko.apache.org


[incubator-pekko-http] 03/04: format source with scalafmt, #8

Posted by md...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

mdedetrich pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-pekko-http.git

commit 90f8bf7e6d6f7a2d6c63313944613a95ff60091a
Author: Auto Format <nobody>
AuthorDate: Mon Nov 14 10:46:23 2022 +0100

    format source with scalafmt, #8
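(For readers unfamiliar with scalafmt: the formatting applied by this commit is driven by the `.scalafmt.conf` added earlier in this batch. The sketch below is illustrative only, showing the general shape of such a file; the actual settings are in the repository's `.scalafmt.conf`.)

```hocon
# Illustrative .scalafmt.conf sketch -- not the project's actual settings.
# scalafmt config files use HOCON syntax.
version = 3.6.1          # scalafmt version to pin for reproducible formatting
maxColumn = 120          # maximum line width before wrapping
align.preset = none      # how aggressively to vertically align tokens
runner.dialect = scala213
```

With a config like this in place, `sbt scalafmtAll` (from the sbt-scalafmt plugin) reformats all sources, which is how a whole-tree commit such as this one is typically produced.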
---
 .../src/main/scala/akka/BenchRunner.scala          |    3 +-
 .../src/main/scala/akka/http/CommonBenchmark.scala |   24 +-
 .../http/impl/engine/ConnectionPoolBenchmark.scala |    9 +-
 .../http/impl/engine/HeaderParserBenchmark.scala   |    3 +-
 .../http/impl/engine/HttpEntityBenchmark.scala     |    3 +-
 .../impl/engine/ServerProcessingBenchmark.scala    |    6 +-
 .../engine/StreamServerProcessingBenchmark.scala   |   21 +-
 .../engine/http2/H2ClientServerBenchmark.scala     |   11 +-
 .../engine/http2/H2RequestResponseBenchmark.scala  |   18 +-
 .../engine/http2/H2ServerProcessingBenchmark.scala |    8 +-
 .../akka/http/impl/engine/ws/MaskingBench.scala    |    2 +-
 .../impl/model/parser/UriParserBenchmark.scala     |    3 +-
 .../unmarshalling/sse/LineParserBenchmark.scala    |    3 +-
 .../main/scala/akka/http/caching/LfuCache.scala    |   19 +-
 .../impl/settings/CachingSettingsImpl.scala        |    5 +-
 .../impl/settings/LfuCachingSettingsImpl.scala     |   16 +-
 .../scala/akka/http/caching/scaladsl/Cache.scala   |   11 +-
 .../http/caching/scaladsl/LfuCacheSettings.scala   |    3 +-
 .../server/directives/CachingDirectives.scala      |   27 +-
 .../server/directives/CachingDirectives.scala      |    8 +-
 .../akka/http/caching/ExpiringLfuCacheSpec.scala   |   49 +-
 .../server/directives/CachingDirectivesSpec.scala  |    4 +-
 .../scaladsl/HostConnectionPoolCompatSpec.scala    |    2 +-
 .../scala-2.13+/akka/http/ccompat/package.scala    |    4 +-
 .../akka/http/scaladsl/util/FastFuture.scala       |   42 +-
 .../scala-2.13-/akka/http/ccompat/package.scala    |   10 +-
 .../akka/http/scaladsl/util/FastFuture.scala       |   48 +-
 .../main/scala/akka/http/ParsingErrorHandler.scala |    5 +-
 .../engine/HttpConnectionIdleTimeoutBidi.scala     |   20 +-
 .../impl/engine/client/HttpsProxyGraphStage.scala  |  214 +-
 .../client/OutgoingConnectionBlueprint.scala       |  269 +--
 .../http/impl/engine/client/PoolInterface.scala    |   28 +-
 .../http/impl/engine/client/PoolMasterActor.scala  |   47 +-
 .../engine/client/pool/NewHostConnectionPool.scala |   71 +-
 .../http/impl/engine/client/pool/SlotState.scala   |   70 +-
 .../akka/http/impl/engine/http2/ByteFlag.scala     |    1 +
 .../akka/http/impl/engine/http2/FrameEvent.scala   |   68 +-
 .../akka/http/impl/engine/http2/FrameLogger.scala  |    9 +-
 .../scala/akka/http/impl/engine/http2/Http2.scala  |  113 +-
 .../http/impl/engine/http2/Http2AlpnSupport.scala  |   36 +-
 .../http/impl/engine/http2/Http2Blueprint.scala    |  100 +-
 .../http/impl/engine/http2/Http2Compliance.scala   |   30 +-
 .../akka/http/impl/engine/http2/Http2Demux.scala   |  235 ++-
 .../http/impl/engine/http2/Http2Multiplexer.scala  |    8 +-
 .../http/impl/engine/http2/Http2Protocol.scala     |   12 +-
 .../impl/engine/http2/Http2StreamHandling.scala    |  160 +-
 .../impl/engine/http2/HttpMessageRendering.scala   |   64 +-
 .../impl/engine/http2/IncomingFlowController.scala |    8 +-
 .../http2/OutgoingConnectionBuilderImpl.scala      |   72 +-
 .../akka/http/impl/engine/http2/PriorityTree.scala |   24 +-
 .../http/impl/engine/http2/ProtocolSwitch.scala    |   67 +-
 .../http/impl/engine/http2/RequestParsing.scala    |   82 +-
 .../http/impl/engine/http2/StreamPrioritizer.scala |   11 +-
 .../akka/http/impl/engine/http2/TelemetrySpi.scala |   12 +-
 .../engine/http2/client/PersistentConnection.scala |  323 +--
 .../impl/engine/http2/client/ResponseParsing.scala |   38 +-
 .../impl/engine/http2/framing/FrameRenderer.scala  |   40 +-
 .../engine/http2/framing/Http2FrameParsing.scala   |   27 +-
 .../engine/http2/hpack/HeaderCompression.scala     |  105 +-
 .../engine/http2/hpack/HeaderDecompression.scala   |  159 +-
 .../engine/http2/hpack/Http2HeaderParsing.scala    |    4 +-
 .../impl/engine/http2/util/AsciiTreeLayout.scala   |   29 +-
 .../http/impl/engine/parsing/BodyPartParser.scala  |   73 +-
 .../akka/http/impl/engine/parsing/BoyerMoore.scala |    4 +-
 .../impl/engine/parsing/HttpHeaderParser.scala     |  184 +-
 .../impl/engine/parsing/HttpMessageParser.scala    |  142 +-
 .../impl/engine/parsing/HttpRequestParser.scala    |  366 ++--
 .../impl/engine/parsing/HttpResponseParser.scala   |   66 +-
 .../http/impl/engine/parsing/ParserOutput.scala    |   33 +-
 .../parsing/SpecializedHeaderValueParsers.scala    |    3 +-
 .../akka/http/impl/engine/parsing/package.scala    |   13 +-
 .../impl/engine/rendering/BodyPartRenderer.scala   |   20 +-
 .../engine/rendering/DateHeaderRendering.scala     |   18 +-
 .../rendering/HttpRequestRendererFactory.scala     |  131 +-
 .../rendering/HttpResponseRendererFactory.scala    |  137 +-
 .../http/impl/engine/rendering/RenderSupport.scala |   14 +-
 .../impl/engine/server/HttpServerBluePrint.scala   |  894 +++++----
 .../http/impl/engine/server/ServerTerminator.scala |  191 +-
 .../UpgradeToOtherProtocolResponseHeader.scala     |    2 +-
 .../akka/http/impl/engine/ws/FrameEvent.scala      |   36 +-
 .../http/impl/engine/ws/FrameEventParser.scala     |   10 +-
 .../http/impl/engine/ws/FrameEventRenderer.scala   |   15 +-
 .../akka/http/impl/engine/ws/FrameHandler.scala    |   25 +-
 .../akka/http/impl/engine/ws/FrameLogger.scala     |   11 +-
 .../akka/http/impl/engine/ws/FrameOutHandler.scala |   32 +-
 .../scala/akka/http/impl/engine/ws/Handshake.scala |   69 +-
 .../scala/akka/http/impl/engine/ws/Masking.scala   |   78 +-
 .../impl/engine/ws/MessageToFrameRenderer.scala    |    2 +-
 .../scala/akka/http/impl/engine/ws/Protocol.scala  |   20 +-
 .../scala/akka/http/impl/engine/ws/Randoms.scala   |    1 +
 .../engine/ws/UpgradeToWebSocketLowLevel.scala     |    7 +-
 .../akka/http/impl/engine/ws/Utf8Decoder.scala     |    8 +-
 .../akka/http/impl/engine/ws/Utf8Encoder.scala     |  112 +-
 .../scala/akka/http/impl/engine/ws/WebSocket.scala |  134 +-
 .../impl/engine/ws/WebSocketClientBlueprint.scala  |   70 +-
 .../scala/akka/http/impl/model/JavaQuery.scala     |    7 +-
 .../akka/http/impl/model/UriJavaAccessor.scala     |    1 +
 .../impl/model/parser/AcceptCharsetHeader.scala    |    2 +-
 .../impl/model/parser/AcceptEncodingHeader.scala   |    4 +-
 .../akka/http/impl/model/parser/AcceptHeader.scala |    5 +-
 .../impl/model/parser/AcceptLanguageHeader.scala   |    2 +-
 .../http/impl/model/parser/Base64Parsing.scala     |    1 +
 .../impl/model/parser/CacheControlHeader.scala     |   26 +-
 .../http/impl/model/parser/CharacterClasses.scala  |    7 +-
 .../http/impl/model/parser/CommonActions.scala     |   30 +-
 .../akka/http/impl/model/parser/CommonRules.scala  |   68 +-
 .../model/parser/ContentDispositionHeader.scala    |   16 +-
 .../http/impl/model/parser/ContentTypeHeader.scala |   10 +-
 .../akka/http/impl/model/parser/HeaderParser.scala |   56 +-
 .../http/impl/model/parser/IpAddressParsing.scala  |   16 +-
 .../akka/http/impl/model/parser/LinkHeader.scala   |   52 +-
 .../http/impl/model/parser/SimpleHeaders.scala     |   23 +-
 .../akka/http/impl/model/parser/UriParser.scala    |   70 +-
 .../http/impl/model/parser/WebSocketHeaders.scala  |    6 +-
 .../settings/ClientConnectionSettingsImpl.scala    |   34 +-
 .../impl/settings/ConnectionPoolSettingsImpl.scala |   73 +-
 .../http/impl/settings/ConnectionPoolSetup.scala   |    6 +-
 .../impl/settings/HostConnectionPoolSetup.scala    |    1 -
 .../impl/settings/HttpsProxySettingsImpl.scala     |    8 +-
 .../http/impl/settings/ParserSettingsImpl.scala    |   77 +-
 .../impl/settings/PreviewServerSettingsImpl.scala  |    6 +-
 .../http/impl/settings/ServerSettingsImpl.scala    |   73 +-
 .../http/impl/settings/WebSocketSettingsImpl.scala |   16 +-
 .../http/impl/util/ByteStringParserInput.scala     |    3 +-
 .../scala/akka/http/impl/util/EnhancedConfig.scala |    2 +-
 .../scala/akka/http/impl/util/EnhancedString.scala |    2 +-
 .../scala/akka/http/impl/util/JavaAccessors.scala  |    1 +
 .../scala/akka/http/impl/util/JavaMapping.scala    |  102 +-
 .../akka/http/impl/util/LogByteStringTools.scala   |   48 +-
 .../akka/http/impl/util/One2OneBidiFlow.scala      |   90 +-
 .../main/scala/akka/http/impl/util/Rendering.scala |   27 +-
 .../http/impl/util/SettingsCompanionImpl.scala     |   16 +-
 .../akka/http/impl/util/SocketOptionSettings.scala |   14 +-
 .../http/impl/util/StageLoggingWithOverride.scala  |    1 +
 .../scala/akka/http/impl/util/StreamUtils.scala    |  240 +--
 .../main/scala/akka/http/impl/util/package.scala   |   70 +-
 .../scala/akka/http/javadsl/ClientTransport.scala  |   10 +-
 .../main/scala/akka/http/javadsl/ConnectHttp.scala |    5 +-
 .../akka/http/javadsl/ConnectionContext.scala      |   39 +-
 .../src/main/scala/akka/http/javadsl/Http.scala    |  267 +--
 .../akka/http/javadsl/IncomingConnection.scala     |   13 +-
 .../akka/http/javadsl/OutgoingConnection.scala     |    1 +
 .../http/javadsl/OutgoingConnectionBuilder.scala   |    1 +
 .../scala/akka/http/javadsl/ServerBinding.scala    |    4 +-
 .../scala/akka/http/javadsl/ServerBuilder.scala    |   16 +-
 .../akka/http/javadsl/model/ContentType.scala      |    2 +
 .../scala/akka/http/javadsl/model/MediaType.scala  |   11 +-
 .../javadsl/model/RequestResponseAssociation.scala |    3 +-
 .../scala/akka/http/javadsl/model/Trailer.scala    |    1 +
 .../scala/akka/http/javadsl/model/ws/Message.scala |   16 +-
 .../http/javadsl/model/ws/UpgradeToWebSocket.scala |    7 +-
 .../akka/http/javadsl/model/ws/WebSocket.scala     |    1 +
 .../http/javadsl/model/ws/WebSocketRequest.scala   |    3 +-
 .../http/javadsl/model/ws/WebSocketUpgrade.scala   |    7 +-
 .../model/ws/WebSocketUpgradeResponse.scala        |    3 +-
 .../settings/ClientConnectionSettings.scala        |   25 +-
 .../javadsl/settings/ConnectionPoolSettings.scala  |   14 +-
 .../javadsl/settings/Http2ClientSettings.scala     |   15 +-
 .../http/javadsl/settings/ParserSettings.scala     |   26 +-
 .../javadsl/settings/PreviewServerSettings.scala   |    1 +
 .../http/javadsl/settings/ServerSettings.scala     |   31 +-
 .../http/javadsl/settings/WebSocketSettings.scala  |    1 +
 .../scala/akka/http/scaladsl/ClientTransport.scala |   36 +-
 .../akka/http/scaladsl/ConnectionContext.scala     |   91 +-
 .../src/main/scala/akka/http/scaladsl/Http.scala   |  331 +--
 .../http/scaladsl/OutgoingConnectionBuilder.scala  |    1 +
 .../scala/akka/http/scaladsl/ServerBuilder.scala   |   19 +-
 .../akka/http/scaladsl/model/AttributeKeys.scala   |    1 -
 .../akka/http/scaladsl/model/ContentRange.scala    |    7 +-
 .../akka/http/scaladsl/model/ContentType.scala     |   18 +-
 .../scala/akka/http/scaladsl/model/DateTime.scala  |   22 +-
 .../scala/akka/http/scaladsl/model/ErrorInfo.scala |   35 +-
 .../scala/akka/http/scaladsl/model/FormData.scala  |    7 +-
 .../akka/http/scaladsl/model/HttpCharset.scala     |    5 +-
 .../akka/http/scaladsl/model/HttpEntity.scala      |  154 +-
 .../akka/http/scaladsl/model/HttpHeader.scala      |    8 +-
 .../akka/http/scaladsl/model/HttpMessage.scala     |  215 +-
 .../akka/http/scaladsl/model/HttpMethod.scala      |   14 +-
 .../akka/http/scaladsl/model/HttpProtocol.scala    |    5 +-
 .../akka/http/scaladsl/model/MediaRange.scala      |   16 +-
 .../scala/akka/http/scaladsl/model/MediaType.scala |   56 +-
 .../scala/akka/http/scaladsl/model/Multipart.scala |  139 +-
 .../akka/http/scaladsl/model/RemoteAddress.scala   |    5 +-
 .../model/RequestResponseAssociation.scala         |    1 -
 .../scala/akka/http/scaladsl/model/Trailer.scala   |    1 +
 .../http/scaladsl/model/TransferEncoding.scala     |    4 +-
 .../main/scala/akka/http/scaladsl/model/Uri.scala  |  110 +-
 .../akka/http/scaladsl/model/WithQValue.scala      |    1 +
 .../http/scaladsl/model/headers/ByteRange.scala    |    8 +
 .../scaladsl/model/headers/CacheDirective.scala    |    5 +-
 .../model/headers/ContentDispositionType.scala     |    2 +-
 .../scaladsl/model/headers/HttpChallenge.scala     |    2 +-
 .../http/scaladsl/model/headers/HttpCookie.scala   |  178 +-
 .../scaladsl/model/headers/HttpCredentials.scala   |    9 +-
 .../http/scaladsl/model/headers/HttpEncoding.scala |    6 +-
 .../scaladsl/model/headers/LanguageRange.scala     |    8 +-
 .../http/scaladsl/model/headers/LinkValue.scala    |   13 +-
 .../scaladsl/model/headers/ProductVersion.scala    |    9 +-
 .../model/headers/WebSocketExtension.scala         |    3 +-
 .../akka/http/scaladsl/model/headers/headers.scala |  124 +-
 .../http/scaladsl/model/http2/Http2Exception.scala |    4 +-
 .../scala/akka/http/scaladsl/model/package.scala   |    1 +
 .../http/scaladsl/model/sse/ServerSentEvent.scala  |   14 +-
 .../akka/http/scaladsl/model/ws/Message.scala      |   29 +-
 .../model/ws/PeerClosedConnectionException.scala   |    3 +-
 .../scaladsl/model/ws/UpgradeToWebSocket.scala     |   29 +-
 .../http/scaladsl/model/ws/WebSocketRequest.scala  |   15 +-
 .../http/scaladsl/model/ws/WebSocketUpgrade.scala  |   29 +-
 .../model/ws/WebSocketUpgradeResponse.scala        |    3 +-
 .../settings/ClientConnectionSettings.scala        |   34 +-
 .../scaladsl/settings/ConnectionPoolSettings.scala |   50 +-
 .../scaladsl/settings/Http2ServerSettings.scala    |   86 +-
 .../scaladsl/settings/HttpsProxySettings.scala     |    4 +-
 .../http/scaladsl/settings/ParserSettings.scala    |   69 +-
 .../http/scaladsl/settings/ServerSettings.scala    |   56 +-
 .../http/scaladsl/settings/WebSocketSettings.scala |    1 +
 .../test/scala/akka/http/HashCodeCollider.scala    |    3 +-
 .../engine/client/ClientCancellationSpec.scala     |   20 +-
 .../client/HighLevelOutgoingConnectionSpec.scala   |   14 +-
 .../engine/client/HostConnectionPoolSpec.scala     |  683 ++++---
 .../impl/engine/client/HttpConfigurationSpec.scala |   52 +-
 .../engine/client/HttpsProxyGraphStageSpec.scala   |    3 +-
 .../client/LowLevelOutgoingConnectionSpec.scala    |  116 +-
 .../impl/engine/client/NewConnectionPoolSpec.scala |  142 +-
 .../impl/engine/client/PrepareResponseSpec.scala   |    7 +-
 .../engine/client/ResponseParsingMergeSpec.scala   |   26 +-
 .../client/TlsEndpointVerificationSpec.scala       |    4 +-
 .../impl/engine/client/pool/SlotStateSpec.scala    |    9 +-
 .../http/impl/engine/parsing/BoyerMooreSpec.scala  |    4 +-
 .../impl/engine/parsing/HttpHeaderParserSpec.scala |  110 +-
 .../engine/parsing/HttpHeaderParserTestBed.scala   |    5 +-
 .../impl/engine/parsing/RequestParserSpec.scala    |  102 +-
 .../impl/engine/parsing/ResponseParserSpec.scala   |   80 +-
 .../engine/rendering/RequestRendererSpec.scala     |   68 +-
 .../engine/rendering/ResponseRendererSpec.scala    |  109 +-
 .../engine/server/HttpServerBug21008Spec.scala     |   70 +-
 .../http/impl/engine/server/HttpServerSpec.scala   |  735 +++----
 .../engine/server/HttpServerTestSetupBase.scala    |   10 +-
 .../HttpServerWithExplicitSchedulerSpec.scala      |   12 +-
 .../impl/engine/server/PrepareRequestsSpec.scala   |   13 +-
 .../akka/http/impl/engine/ws/BitBuilder.scala      |   11 +-
 .../http/impl/engine/ws/ByteStringSinkProbe.scala  |   12 +-
 .../http/impl/engine/ws/EchoTestClientApp.scala    |    4 +-
 .../akka/http/impl/engine/ws/FramingSpec.scala     |    6 +-
 .../akka/http/impl/engine/ws/MessageSpec.scala     |   76 +-
 .../akka/http/impl/engine/ws/Utf8CodingSpecs.scala |   12 +-
 .../http/impl/engine/ws/WSServerAutobahnTest.scala |    4 +-
 .../akka/http/impl/engine/ws/WSTestSetupBase.scala |   54 +-
 .../akka/http/impl/engine/ws/WSTestUtils.scala     |   34 +-
 .../http/impl/engine/ws/WebSocketClientSpec.scala  |   23 +-
 .../impl/engine/ws/WebSocketIntegrationSpec.scala  |   35 +-
 .../http/impl/engine/ws/WebSocketServerSpec.scala  |    3 +-
 .../http/impl/model/parser/HttpHeaderSpec.scala    |  186 +-
 .../http/impl/util/AkkaSpecWithMaterializer.scala  |   17 +-
 .../scala/akka/http/impl/util/BenchUtils.scala     |    1 +
 .../akka/http/impl/util/CollectionStage.scala      |    3 +-
 .../akka/http/impl/util/ExampleHttpContexts.scala  |    5 +-
 .../akka/http/impl/util/One2OneBidiFlowSpec.scala  |   22 +-
 .../scala/akka/http/impl/util/RenderingSpec.scala  |    5 +-
 .../akka/http/impl/util/StreamUtilsSpec.scala      |    4 +-
 .../akka/http/impl/util/WithLogCapturing.scala     |    4 +-
 .../akka/http/javadsl/ConnectionContextSpec.scala  |    3 +-
 .../akka/http/javadsl/HttpExtensionApiSpec.scala   |   20 +-
 .../akka/http/javadsl/model/JavaApiSpec.scala      |    8 +-
 .../http/javadsl/model/JavaApiTestCaseSpecs.scala  |    6 +-
 .../akka/http/javadsl/model/MultipartsSpec.scala   |    3 +-
 .../javadsl/model/headers/HttpCookieSpec.scala     |    3 +-
 .../akka/http/scaladsl/ClientServerSpec.scala      |  131 +-
 .../ClientTransportWithCustomResolverSpec.scala    |    6 +-
 .../http/scaladsl/GracefulTerminationSpec.scala    |   38 +-
 .../test/scala/akka/http/scaladsl/TestClient.scala |   12 +-
 .../http/scaladsl/TightRequestTimeoutSpec.scala    |    5 +-
 .../akka/http/scaladsl/model/DateTimeSpec.scala    |    2 +-
 .../http/scaladsl/model/EntityDiscardingSpec.scala |    3 +-
 .../akka/http/scaladsl/model/HttpEntitySpec.scala  |   85 +-
 .../akka/http/scaladsl/model/HttpMessageSpec.scala |   21 +-
 .../akka/http/scaladsl/model/MultipartSpec.scala   |   17 +-
 .../akka/http/scaladsl/model/TurkishISpec.scala    |    2 +-
 .../scala/akka/http/scaladsl/model/UriSpec.scala   |  223 ++-
 .../http/scaladsl/model/headers/HeaderSpec.scala   |   63 +-
 .../settings/ConnectionPoolSettingsSpec.scala      |   12 +-
 .../src/test/scala/akka/stream/testkit/Utils.scala |    6 +-
 .../src/test/scala/akka/testkit/AkkaSpec.scala     |   12 +-
 .../src/test/scala/akka/testkit/Coroner.scala      |   20 +-
 .../http/HttpModelIntegrationSpec.scala            |    7 +-
 .../sprayjson/SprayJsonByteStringParserInput.scala |    1 -
 .../marshallers/sprayjson/SprayJsonSupport.scala   |   46 +-
 .../sprayjson/SprayJsonSupportSpec.scala           |    3 +-
 .../scaladsl/marshallers/xml/ScalaXmlSupport.scala |    7 +-
 .../akka/http/fix/MigrateToServerBuilder.scala     |   33 +-
 .../akka/http/fix/MigrateToServerBuilderTest.scala |    2 +-
 .../akka/http/javadsl/testkit/JUnitRouteTest.scala |    9 +-
 .../akka/http/javadsl/testkit/RouteTest.scala      |    9 +-
 .../http/javadsl/testkit/TestRouteResult.scala     |   11 +-
 .../scala/akka/http/javadsl/testkit/WSProbe.scala  |    3 +-
 .../javadsl/testkit/WSTestRequestBuilding.scala    |   10 +-
 .../scaladsl/testkit/MarshallingTestUtils.scala    |   13 +-
 .../akka/http/scaladsl/testkit/RouteTest.scala     |   31 +-
 .../testkit/RouteTestResultComponent.scala         |    3 +-
 .../http/scaladsl/testkit/ScalatestUtils.scala     |   14 +-
 .../akka/http/scaladsl/testkit/Specs2Utils.scala   |    6 +-
 .../scala/akka/http/scaladsl/testkit/WSProbe.scala |   17 +-
 .../scaladsl/testkit/WSTestRequestBuilding.scala   |   31 +-
 .../scaladsl/testkit/ScalatestRouteTestSpec.scala  |   20 +-
 .../http/AkkaHttpServerLatencyMultiNodeSpec.scala  |   58 +-
 .../scala/akka/http/STMultiNodeSpec.scala          |    4 +-
 .../akka/remote/testkit/MultiNodeConfig.scala      |   87 +-
 .../akka/remote/testkit/PerfFlamesSupport.scala    |    2 +-
 .../scala/org/scalatest/extra/QuietReporter.scala  |    3 +-
 .../test/scala/akka/http/HashCodeCollider.scala    |    3 +-
 .../http/javadsl/DirectivesConsistencySpec.scala   |   27 +-
 .../akka/http/javadsl/server/HttpAppSpec.scala     |   35 +-
 .../akka/http/scaladsl/CustomMediaTypesSpec.scala  |   13 +-
 .../akka/http/scaladsl/CustomStatusCodesSpec.scala |    7 +-
 .../scala/akka/http/scaladsl/FormDataSpec.scala    |   12 +-
 .../scaladsl/RouteJavaScalaDslConversionSpec.scala |    8 +-
 .../akka/http/scaladsl/TestSingleRequest.scala     |    2 +-
 .../http/scaladsl/coding/CodecSpecSupport.scala    |    6 +-
 .../akka/http/scaladsl/coding/CoderSpec.scala      |    7 +-
 .../akka/http/scaladsl/coding/DecoderSpec.scala    |   27 +-
 .../akka/http/scaladsl/coding/DeflateSpec.scala    |   11 +-
 .../akka/http/scaladsl/coding/EncoderSpec.scala    |    3 +-
 .../scala/akka/http/scaladsl/coding/GzipSpec.scala |    2 +-
 .../akka/http/scaladsl/coding/NoCodingSpec.scala   |    2 +-
 .../scaladsl/marshallers/JsonSupportSpec.scala     |    2 +-
 .../sprayjson/SprayJsonSupportSpec.scala           |    4 +-
 .../marshallers/xml/ScalaXmlSupportSpec.scala      |   14 +-
 .../marshalling/ContentNegotiationSpec.scala       |  191 +-
 .../FromStatusCodeAndXYZMarshallerSpec.scala       |    9 +-
 .../scaladsl/marshalling/MarshallingSpec.scala     |   76 +-
 .../http/scaladsl/server/BasicRouteSpecs.scala     |   39 +-
 .../http/scaladsl/server/ConnectionTestApp.scala   |   10 +-
 .../DiscardEntityDefaultExceptionHandlerSpec.scala |   15 +-
 .../DontLeakActorsOnFailingConnectionSpecs.scala   |    2 +-
 .../http/scaladsl/server/EntityStreamingSpec.scala |   92 +-
 .../akka/http/scaladsl/server/HttpAppSpec.scala    |   11 +-
 .../scaladsl/server/ModeledCustomHeaderSpec.scala  |   12 +-
 .../akka/http/scaladsl/server/RejectionSpec.scala  |   26 +-
 .../akka/http/scaladsl/server/RoutingSpec.scala    |    3 +-
 .../akka/http/scaladsl/server/SizeLimitSpec.scala  |   29 +-
 .../akka/http/scaladsl/server/TcpLeakApp.scala     |    9 +-
 .../directives/AttributeDirectivesSpec.scala       |    1 -
 .../directives/CacheConditionDirectivesSpec.scala  |   36 +-
 .../server/directives/CodingDirectivesSpec.scala   |   38 +-
 .../directives/DebuggingDirectivesSpec.scala       |    9 +-
 .../directives/ExecutionDirectivesSpec.scala       |  148 +-
 .../directives/FileAndResourceDirectivesSpec.scala |   21 +-
 .../directives/FileUploadDirectivesSpec.scala      |   88 +-
 .../directives/FormFieldDirectivesSpec.scala       |   33 +-
 .../server/directives/FutureDirectivesSpec.scala   |   48 +-
 .../server/directives/HeaderDirectivesSpec.scala   |    5 +-
 .../directives/MarshallingDirectivesSpec.scala     |   22 +-
 .../server/directives/MethodDirectivesSpec.scala   |   15 +-
 .../server/directives/MiscDirectivesSpec.scala     |   24 +-
 .../directives/ParameterDirectivesSpec.scala       |   24 +-
 .../server/directives/PathDirectivesSpec.scala     |  473 ++---
 .../server/directives/RangeDirectivesSpec.scala    |    5 +-
 .../server/directives/RouteDirectivesSpec.scala    |   41 +-
 .../server/directives/SchemeDirectivesSpec.scala   |   14 +-
 .../server/directives/SecurityDirectivesSpec.scala |   13 +-
 .../server/directives/TimeoutDirectivesSpec.scala  |   12 +-
 .../directives/WebSocketDirectivesSpec.scala       |   57 +-
 .../http/scaladsl/server/util/TupleOpsSpec.scala   |    6 +-
 .../unmarshalling/MultipartUnmarshallersSpec.scala |  173 +-
 .../scaladsl/unmarshalling/UnmarshallingSpec.scala |   18 +-
 .../sse/EventStreamUnmarshallingSpec.scala         |    3 +-
 .../sse/ServerSentEventParserSpec.scala            |    8 +-
 .../http/impl/settings/RoutingSettingsImpl.scala   |   21 +-
 .../settings/ServerSentEventSettingsImpl.scala     |   10 +-
 .../javadsl/common/EntityStreamingSupport.scala    |    6 +
 .../akka/http/javadsl/marshalling/Marshaller.scala |   41 +-
 .../akka/http/javadsl/server/Directives.scala      |   15 +-
 .../http/javadsl/server/ExceptionHandler.scala     |    2 +
 .../akka/http/javadsl/server/PathMatchers.scala    |   26 +-
 .../http/javadsl/server/RejectionHandler.scala     |   18 +-
 .../akka/http/javadsl/server/Rejections.scala      |   26 +-
 .../akka/http/javadsl/server/RequestContext.scala  |   19 +-
 .../scala/akka/http/javadsl/server/Route.scala     |    7 +-
 .../http/javadsl/server/RoutingJavaMapping.scala   |   36 +-
 .../server/directives/AttributeDirectives.scala    |    2 +-
 .../server/directives/BasicDirectives.scala        |   54 +-
 .../directives/CacheConditionDirectives.scala      |    7 +-
 .../server/directives/CodingDirectives.scala       |    1 +
 .../server/directives/CookieDirectives.scala       |    1 +
 .../server/directives/DebuggingDirectives.scala    |   29 +-
 .../directives/FileAndResourceDirectives.scala     |   33 +-
 .../server/directives/FileUploadDirectives.scala   |   27 +-
 .../server/directives/FormFieldDirectives.scala    |    2 +-
 .../FramedEntityStreamingDirectives.scala          |   11 +-
 .../server/directives/FutureDirectives.scala       |    6 +-
 .../server/directives/HeaderDirectives.scala       |   31 +-
 .../javadsl/server/directives/HostDirectives.scala |    1 +
 .../server/directives/MarshallingDirectives.scala  |   19 +-
 .../server/directives/ParameterDirectives.scala    |   23 +-
 .../javadsl/server/directives/PathDirectives.scala |    3 +-
 .../server/directives/RangeDirectives.scala        |    1 +
 .../javadsl/server/directives/RouteAdapter.scala   |    2 +-
 .../server/directives/RouteDirectives.scala        |   20 +-
 .../server/directives/SchemeDirectives.scala       |    1 +
 .../server/directives/SecurityDirectives.scala     |   65 +-
 .../server/directives/TimeoutDirectives.scala      |   15 +-
 .../server/directives/WebSocketDirectives.scala    |   10 +-
 .../http/javadsl/settings/RoutingSettings.scala    |   23 +-
 .../javadsl/settings/ServerSentEventSettings.scala |    1 -
 .../javadsl/unmarshalling/StringUnmarshaller.scala |   10 +-
 .../http/javadsl/unmarshalling/Unmarshaller.scala  |   52 +-
 .../sse/EventStreamUnmarshalling.scala             |    7 +-
 .../http/scaladsl/client/RequestBuilding.scala     |   14 +-
 .../client/TransformerPipelineSupport.scala        |    4 +-
 .../scala/akka/http/scaladsl/coding/Coders.scala   |   10 +-
 .../akka/http/scaladsl/coding/DataMapper.scala     |    5 +-
 .../scala/akka/http/scaladsl/coding/Decoder.scala  |    4 +-
 .../scala/akka/http/scaladsl/coding/Deflate.scala  |    3 +-
 .../http/scaladsl/coding/DeflateCompressor.scala   |   12 +-
 .../scala/akka/http/scaladsl/coding/Encoder.scala  |    9 +-
 .../scala/akka/http/scaladsl/coding/Gzip.scala     |    3 +-
 .../akka/http/scaladsl/coding/GzipCompressor.scala |    6 +-
 .../common/CsvEntityStreamingSupport.scala         |   16 +-
 .../scaladsl/common/EntityStreamingSupport.scala   |    5 +-
 .../common/JsonEntityStreamingSupport.scala        |   16 +-
 .../akka/http/scaladsl/common/NameReceptacle.scala |   13 +-
 .../akka/http/scaladsl/common/StrictForm.scala     |  105 +-
 .../scaladsl/marshalling/GenericMarshallers.scala  |    5 +-
 .../akka/http/scaladsl/marshalling/Marshal.scala   |   19 +-
 .../http/scaladsl/marshalling/Marshaller.scala     |   49 +-
 .../marshalling/MultipartMarshallers.scala         |    7 +-
 .../PredefinedToEntityMarshallers.scala            |    9 +-
 .../PredefinedToRequestMarshallers.scala           |    9 +-
 .../PredefinedToResponseMarshallers.scala          |   93 +-
 .../marshalling/ToResponseMarshallable.scala       |    2 +-
 .../akka/http/scaladsl/marshalling/package.scala   |    4 +-
 .../http/scaladsl/server/ContentNegotation.scala   |   10 +-
 .../akka/http/scaladsl/server/Directive.scala      |   48 +-
 .../akka/http/scaladsl/server/Directives.scala     |   52 +-
 .../http/scaladsl/server/ExceptionHandler.scala    |   40 +-
 .../scala/akka/http/scaladsl/server/HttpApp.scala  |   11 +-
 .../akka/http/scaladsl/server/PathMatcher.scala    |   47 +-
 .../akka/http/scaladsl/server/Rejection.scala      |   82 +-
 .../http/scaladsl/server/RejectionHandler.scala    |   57 +-
 .../akka/http/scaladsl/server/RequestContext.scala |    9 +-
 .../http/scaladsl/server/RequestContextImpl.scala  |   42 +-
 .../scala/akka/http/scaladsl/server/Route.scala    |   54 +-
 .../http/scaladsl/server/RouteConcatenation.scala  |    1 +
 .../akka/http/scaladsl/server/RouteResult.scala    |   30 +-
 .../akka/http/scaladsl/server/RoutingLog.scala     |    2 +-
 .../server/directives/BasicDirectives.scala        |   34 +-
 .../directives/CacheConditionDirectives.scala      |    7 +-
 .../server/directives/CodingDirectives.scala       |    2 +-
 .../server/directives/DebuggingDirectives.scala    |   12 +-
 .../server/directives/ExecutionDirectives.scala    |   12 +-
 .../directives/FileAndResourceDirectives.scala     |   50 +-
 .../server/directives/FileUploadDirectives.scala   |    4 +-
 .../server/directives/FormFieldDirectives.scala    |  110 +-
 .../FramedEntityStreamingDirectives.scala          |    6 +-
 .../server/directives/HeaderDirectives.scala       |   35 +-
 .../server/directives/HostDirectives.scala         |    6 +-
 .../server/directives/MarshallingDirectives.scala  |   18 +-
 .../server/directives/MethodDirectives.scala       |    8 +-
 .../server/directives/MiscDirectives.scala         |   15 +-
 .../server/directives/ParameterDirectives.scala    |   60 +-
 .../server/directives/PathDirectives.scala         |    6 +-
 .../server/directives/RangeDirectives.scala        |   13 +-
 .../server/directives/RouteDirectives.scala        |   37 +-
 .../server/directives/SecurityDirectives.scala     |   74 +-
 .../server/directives/WebSocketDirectives.scala    |    5 +-
 .../http/scaladsl/server/util/BinaryPolyFunc.scala |    1 -
 .../akka/http/scaladsl/server/util/Tuple.scala     |   28 +-
 .../akka/http/scaladsl/server/util/TupleOps.scala  |    3 +-
 .../http/scaladsl/settings/RoutingSettings.scala   |   30 +-
 .../unmarshalling/GenericUnmarshallers.scala       |   23 +-
 .../unmarshalling/MultipartUnmarshallers.scala     |  132 +-
 .../PredefinedFromEntityUnmarshallers.scala        |   13 +-
 .../PredefinedFromStringUnmarshallers.scala        |    8 +-
 .../http/scaladsl/unmarshalling/Unmarshal.scala    |    1 +
 .../http/scaladsl/unmarshalling/Unmarshaller.scala |   53 +-
 .../akka/http/scaladsl/unmarshalling/package.scala |    4 +-
 .../sse/EventStreamUnmarshalling.scala             |   18 +-
 .../scaladsl/unmarshalling/sse/LineParser.scala    |   10 +-
 .../unmarshalling/sse/ServerSentEventParser.scala  |   16 +-
 .../impl/engine/http2/H2SpecIntegrationSpec.scala  |   22 +-
 .../http/impl/engine/http2/H2cUpgradeSpec.scala    |   18 +-
 .../impl/engine/http2/HPackEncodingSupport.scala   |   17 +-
 .../http/impl/engine/http2/HPackSpecExamples.scala |    9 +-
 .../impl/engine/http2/Http2ClientServerSpec.scala  |   48 +-
 .../http/impl/engine/http2/Http2ClientSpec.scala   |  834 ++++----
 .../impl/engine/http2/Http2FrameHpackSupport.scala |   32 +-
 .../http/impl/engine/http2/Http2FrameProbe.scala   |   36 +-
 .../http/impl/engine/http2/Http2FrameSending.scala |    9 +-
 .../engine/http2/Http2PersistentClientSpec.scala   |  202 +-
 .../impl/engine/http2/Http2ServerDemuxSpec.scala   |   11 +-
 .../http/impl/engine/http2/Http2ServerSpec.scala   | 2118 ++++++++++----------
 .../engine/http2/HttpMessageRenderingSpec.scala    |   41 +-
 .../http/impl/engine/http2/PriorityTreeSpec.scala  |   81 +-
 .../impl/engine/http2/ProtocolSwitchSpec.scala     |   21 +-
 .../impl/engine/http2/RequestParsingSpec.scala     |  174 +-
 .../http/impl/engine/http2/TelemetrySpiSpec.scala  |   84 +-
 .../http/impl/engine/http2/WindowTracking.scala    |    3 +-
 .../impl/engine/http2/WithPriorKnowledgeSpec.scala |    9 +-
 .../engine/http2/framing/Http2FramingSpec.scala    |   29 +-
 .../akka/http/impl/engine/http2/package.scala      |    7 +-
 .../scala/akka/http/scaladsl/Http2ServerTest.scala |   12 +-
 .../main/scala/akka/parboiled2/CharPredicate.scala |   26 +-
 .../src/main/scala/akka/parboiled2/CharUtils.scala |    5 +-
 .../akka/parboiled2/DynamicRuleDispatch.scala      |    6 +-
 .../scala/akka/parboiled2/ErrorFormatter.scala     |   26 +-
 .../main/scala/akka/parboiled2/ParseError.scala    |   10 +-
 .../src/main/scala/akka/parboiled2/Parser.scala    |   59 +-
 .../main/scala/akka/parboiled2/ParserInput.scala   |   10 +-
 .../src/main/scala/akka/parboiled2/Rule.scala      |   27 +-
 .../scala/akka/parboiled2/RuleDSLActions.scala     |    3 +-
 .../scala/akka/parboiled2/RuleDSLCombinators.scala |   13 +-
 .../akka/parboiled2/support/ActionOpsSupport.scala |   32 +-
 .../akka/parboiled2/support/OpTreeContext.scala    |  128 +-
 .../scala/akka/parboiled2/support/RunResult.scala  |   17 +-
 .../scala/akka/parboiled2/support/Unpack.scala     |    2 +-
 .../src/main/scala/akka/shapeless/ops/hlists.scala |   10 +-
 build.sbt                                          |  133 +-
 .../scala/docs/ApiMayChangeDocCheckerSpec.scala    |    4 +-
 docs/src/test/scala/docs/CompileOnlySpec.scala     |    1 +
 .../scala/docs/http/scaladsl/Http2ClientApp.scala  |   29 +-
 .../test/scala/docs/http/scaladsl/Http2Spec.scala  |   23 +-
 .../scaladsl/HttpClientDecodingExampleSpec.scala   |    4 +-
 .../docs/http/scaladsl/HttpClientExampleSpec.scala |   72 +-
 .../HttpRequestDetailedStringExampleSpec.scala     |    6 +-
 .../HttpResponseDetailedStringExampleSpec.scala    |    6 +-
 .../docs/http/scaladsl/HttpServerExampleSpec.scala |   69 +-
 .../docs/http/scaladsl/HttpServerHighLevel.scala   |    3 +-
 .../HttpServerStreamingRandomNumbers.scala         |    6 +-
 .../scaladsl/HttpServerWithActorInteraction.scala  |    3 +-
 .../http/scaladsl/HttpServerWithActorsSample.scala |   28 +-
 .../docs/http/scaladsl/HttpsExamplesSpec.scala     |    4 +-
 .../scala/docs/http/scaladsl/MarshalSpec.scala     |    4 +-
 .../test/scala/docs/http/scaladsl/ModelSpec.scala  |   33 +-
 .../docs/http/scaladsl/RouteSealExampleSpec.scala  |    7 +-
 .../scaladsl/ServerSentEventsExampleSpec.scala     |    8 +-
 .../docs/http/scaladsl/SprayJsonExample.scala      |    3 +-
 .../docs/http/scaladsl/SprayJsonExampleSpec.scala  |    7 +-
 .../http/scaladsl/SprayJsonPrettyMarshalSpec.scala |   12 +-
 .../scala/docs/http/scaladsl/UnmarshalSpec.scala   |    4 +-
 .../http/scaladsl/WebSocketClientExampleSpec.scala |   30 +-
 .../server/AkkaHttp1020MigrationSpec.scala         |    8 +-
 .../server/BlockingInHttpExamplesSpec.scala        |   10 +-
 .../server/CaseClassExtractionExamplesSpec.scala   |   22 +-
 .../scaladsl/server/DirectiveExamplesSpec.scala    |   48 +-
 .../scaladsl/server/FileUploadExamplesSpec.scala   |   13 +-
 .../server/FullSpecs2TestKitExampleSpec.scala      |    3 +-
 .../scaladsl/server/FullTestKitExampleSpec.scala   |    3 +-
 .../scaladsl/server/HttpsServerExampleSpec.scala   |   18 +-
 .../server/RejectionHandlerExamplesSpec.scala      |   31 +-
 .../server/ServerShutdownExampleSpec.scala         |    2 +-
 .../scaladsl/server/WebSocketExampleSpec.scala     |   30 +-
 .../AttributeDirectivesExamplesSpec.scala          |   20 +-
 .../directives/BasicDirectivesExamplesSpec.scala   |  187 +-
 .../directives/CachingDirectivesExamplesSpec.scala |   24 +-
 .../directives/CodingDirectivesExamplesSpec.scala  |   37 +-
 .../directives/CookieDirectivesExamplesSpec.scala  |   21 +-
 .../directives/CustomDirectivesExamplesSpec.scala  |   28 +-
 .../server/directives/CustomHttpMethodSpec.scala   |   11 +-
 .../DebuggingDirectivesExamplesSpec.scala          |   32 +-
 .../ExecutionDirectivesExamplesSpec.scala          |   10 +-
 .../FileAndResourceDirectivesExamplesSpec.scala    |   31 +-
 .../FileUploadDirectivesExamplesSpec.scala         |   17 +-
 .../FormFieldDirectivesExamplesSpec.scala          |   23 +-
 .../directives/FutureDirectivesExamplesSpec.scala  |   25 +-
 .../directives/HeaderDirectivesExamplesSpec.scala  |   76 +-
 .../directives/HostDirectivesExamplesSpec.scala    |   25 +-
 .../directives/JsonStreamingExamplesSpec.scala     |   81 +-
 .../directives/JsonStreamingFullExamples.scala     |   13 +-
 .../MarshallingDirectivesExamplesSpec.scala        |   49 +-
 .../directives/MethodDirectivesExamplesSpec.scala  |   46 +-
 .../directives/MiscDirectivesExamplesSpec.scala    |   45 +-
 .../ParameterDirectivesExamplesSpec.scala          |   50 +-
 .../directives/PathDirectivesExamplesSpec.scala    |  110 +-
 .../directives/RangeDirectivesExamplesSpec.scala   |    6 +-
 .../RespondWithDirectivesExamplesSpec.scala        |   16 +-
 .../directives/RouteDirectivesExamplesSpec.scala   |   41 +-
 .../directives/SchemeDirectivesExamplesSpec.scala  |   11 +-
 .../SecurityDirectivesExamplesSpec.scala           |  170 +-
 .../server/directives/StyleGuideExamplesSpec.scala |   42 +-
 .../directives/TimeoutDirectivesExamplesSpec.scala |   37 +-
 .../WebSocketDirectivesExamplesSpec.scala          |   70 +-
 project/AkkaDependency.scala                       |   25 +-
 project/AutomaticModuleName.scala                  |    7 +-
 project/Common.scala                               |    7 +-
 project/CopyrightHeader.scala                      |   12 +-
 project/Dependencies.scala                         |   64 +-
 project/Doc.scala                                  |   56 +-
 project/MiMa.scala                                 |    8 +-
 project/MultiNode.scala                            |   22 +-
 project/ParadoxSupport.scala                       |   22 +-
 project/Publish.scala                              |   10 +-
 project/SbtInternalAccess.scala                    |   10 +-
 project/ValidatePullRequest.scala                  |  192 +-
 project/VersionGenerator.scala                     |    3 +-
 594 files changed, 13584 insertions(+), 11399 deletions(-)

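[Editor's note] The diffstat above reflects a repository-wide reformat driven by the new `.scalafmt.conf` (77 lines in this commit). That file's contents are not reproduced in this email; as a purely hypothetical sketch, a scalafmt configuration producing the indentation style visible in the hunks below (4-space definition-site indent, closing parentheses kept on the same line rather than dangling) could look like:

```
# Hypothetical sketch only -- the actual 77-line .scalafmt.conf from this
# commit is not shown in this email.
version = 3.6.1
runner.dialect = scala212
maxColumn = 120
# 4-space indent for definition-site parameter lists, as seen in the
# LfuCachingSettingsImpl hunk below
continuationIndent.defnSite = 4
# keep closing parens attached instead of dangling them on their own line,
# as seen in the CommonBenchmark @Fork hunk below
danglingParentheses.defnSite = false
danglingParentheses.callSite = false
```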
diff --git a/akka-http-bench-jmh/src/main/scala/akka/BenchRunner.scala b/akka-http-bench-jmh/src/main/scala/akka/BenchRunner.scala
index 29b6ba421..8873fbf6a 100644
--- a/akka-http-bench-jmh/src/main/scala/akka/BenchRunner.scala
+++ b/akka-http-bench-jmh/src/main/scala/akka/BenchRunner.scala
@@ -27,7 +27,8 @@ object BenchRunner {
 
     val report = results.asScala.map { result: RunResult =>
       val bench = result.getParams.getBenchmark
-      val params = result.getParams.getParamsKeys.asScala.map(key => s"$key=${result.getParams.getParam(key)}").mkString("_")
+      val params =
+        result.getParams.getParamsKeys.asScala.map(key => s"$key=${result.getParams.getParam(key)}").mkString("_")
       val score = result.getAggregatedResult.getPrimaryResult.getScore.round
       val unit = result.getAggregatedResult.getPrimaryResult.getScoreUnit
       s"\t${bench}_${params}\t$score\t$unit"
diff --git a/akka-http-bench-jmh/src/main/scala/akka/http/CommonBenchmark.scala b/akka-http-bench-jmh/src/main/scala/akka/http/CommonBenchmark.scala
index e1e12e1f0..9a8a8446e 100644
--- a/akka-http-bench-jmh/src/main/scala/akka/http/CommonBenchmark.scala
+++ b/akka-http-bench-jmh/src/main/scala/akka/http/CommonBenchmark.scala
@@ -18,18 +18,18 @@ import org.openjdk.jmh.annotations.Warmup
 @State(Scope.Thread)
 @Warmup(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
 @Measurement(iterations = 5, time = 2, timeUnit = TimeUnit.SECONDS)
-@Fork(value = 3, jvmArgs = Array(
-  "-server",
-  "-Xms1g",
-  "-Xmx1g",
-  "-XX:NewSize=500m",
-  "-XX:MaxNewSize=500m",
-  "-XX:InitialCodeCacheSize=512m",
-  "-XX:ReservedCodeCacheSize=512m",
-  "-XX:+UseParallelGC",
-  "-XX:-UseBiasedLocking",
-  "-XX:+AlwaysPreTouch"
-))
+@Fork(value = 3,
+  jvmArgs = Array(
+    "-server",
+    "-Xms1g",
+    "-Xmx1g",
+    "-XX:NewSize=500m",
+    "-XX:MaxNewSize=500m",
+    "-XX:InitialCodeCacheSize=512m",
+    "-XX:ReservedCodeCacheSize=512m",
+    "-XX:+UseParallelGC",
+    "-XX:-UseBiasedLocking",
+    "-XX:+AlwaysPreTouch"))
 @BenchmarkMode(Array(Mode.Throughput))
 @OutputTimeUnit(TimeUnit.SECONDS)
 abstract class CommonBenchmark
diff --git a/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/ConnectionPoolBenchmark.scala b/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/ConnectionPoolBenchmark.scala
index 1c368ec94..4f1125c57 100644
--- a/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/ConnectionPoolBenchmark.scala
+++ b/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/ConnectionPoolBenchmark.scala
@@ -75,13 +75,13 @@ class ConnectionPoolBenchmark extends CommonBenchmark {
         |Date: Wed, 01 Jul 2020 13:26:33 GMT
         |Content-Length: 0
         |
-        |""".stripMarginWithNewline("\r\n")
-    )
+        |""".stripMarginWithNewline("\r\n"))
     val endOfRequest = ByteString("\r\n\r\n")
     // a transport that implements a complete HTTP server (yes, really, see below)
     val clientTransport =
       new ClientTransport {
-        override def connectTo(host: String, port: Int, settings: ClientConnectionSettings)(implicit system: ActorSystem): Flow[ByteString, ByteString, Future[Http.OutgoingConnection]] =
+        override def connectTo(host: String, port: Int, settings: ClientConnectionSettings)(
+            implicit system: ActorSystem): Flow[ByteString, ByteString, Future[Http.OutgoingConnection]] =
           Flow[ByteString]
             // currently not needed because request will be sent in single chunk
             // .via(Framing.delimiter(ByteString("\r\n\r\n"), 1000))
@@ -99,8 +99,7 @@ class ConnectionPoolBenchmark extends CommonBenchmark {
       }
     poolSettings =
       ConnectionPoolSettings(system).withConnectionSettings(
-        ClientConnectionSettings(system).withTransport(clientTransport)
-      )
+        ClientConnectionSettings(system).withTransport(clientTransport))
   }
 
   @TearDown
diff --git a/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/HeaderParserBenchmark.scala b/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/HeaderParserBenchmark.scala
index be03071c8..e0c019aec 100644
--- a/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/HeaderParserBenchmark.scala
+++ b/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/HeaderParserBenchmark.scala
@@ -49,8 +49,7 @@ private[engine] class HeaderParserBenchmark {
     val settings = ParserSettingsImpl.fromSubConfig(root, root.getConfig("akka.http.server.parsing"))
     if (withCustomMediaTypes == "no") settings
     else settings.withCustomMediaTypes(
-      MediaType.customWithOpenCharset("application", "json")
-    )
+      MediaType.customWithOpenCharset("application", "json"))
   }
 
   @TearDown
diff --git a/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/HttpEntityBenchmark.scala b/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/HttpEntityBenchmark.scala
index 7ca9f5bcf..1f5bbf364 100644
--- a/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/HttpEntityBenchmark.scala
+++ b/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/HttpEntityBenchmark.scala
@@ -53,8 +53,7 @@ class HttpEntityBenchmark extends CommonBenchmark {
         HttpEntity.Default(
           ContentTypes.`application/octet-stream`,
           10 * chunk.size,
-          Source.repeat(chunk).take(10)
-        )
+          Source.repeat(chunk).take(10))
     }
   }
 
diff --git a/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/ServerProcessingBenchmark.scala b/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/ServerProcessingBenchmark.scala
index 3bc3af775..fce3b06ad 100644
--- a/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/ServerProcessingBenchmark.scala
+++ b/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/ServerProcessingBenchmark.scala
@@ -54,9 +54,9 @@ class ServerProcessingBenchmark extends CommonBenchmark {
     system = ActorSystem("AkkaHttpBenchmarkSystem", config)
     mat = ActorMaterializer()
     httpFlow =
-      Flow[HttpRequest].map(_ => response) join
-        (HttpServerBluePrint(ServerSettings(system), NoLogging, false, Http().dateHeaderRendering) atop
-          TLSPlacebo())
+      Flow[HttpRequest].map(_ => response).join(
+        HttpServerBluePrint(ServerSettings(system), NoLogging, false, Http().dateHeaderRendering).atop(
+          TLSPlacebo()))
   }
 
   @TearDown
diff --git a/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/StreamServerProcessingBenchmark.scala b/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/StreamServerProcessingBenchmark.scala
index b76b216da..d3d7751ed 100644
--- a/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/StreamServerProcessingBenchmark.scala
+++ b/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/StreamServerProcessingBenchmark.scala
@@ -58,7 +58,8 @@ class StreamServerProcessingBenchmark extends CommonBenchmark {
       .runWith(Sink.fold(0L)(_ + _.size))
       .onComplete { res =>
         latch.countDown()
-        require(res.filter(_ >= totalExpectedBytes).isSuccess, s"Expected at least $totalExpectedBytes but only got $res")
+        require(res.filter(_ >= totalExpectedBytes).isSuccess,
+          s"Expected at least $totalExpectedBytes but only got $res")
       }(system.dispatcher)
 
     latch.await()
@@ -83,29 +84,27 @@ class StreamServerProcessingBenchmark extends CommonBenchmark {
 
     val entity = entityType match {
       case "strict" =>
-        HttpEntity.Strict(ContentTypes.`application/octet-stream`, ByteString(new Array[Byte](bytesPerChunk.toInt * numChunks.toInt)))
+        HttpEntity.Strict(ContentTypes.`application/octet-stream`,
+          ByteString(new Array[Byte](bytesPerChunk.toInt * numChunks.toInt)))
       case "chunked" =>
         HttpEntity.Chunked.fromData(
           ContentTypes.`application/octet-stream`,
-          streamedBytes
-        )
+          streamedBytes)
       case "default" =>
         HttpEntity.Default(
           ContentTypes.`application/octet-stream`,
           bytesPerChunk.toInt * numChunks.toInt,
-          streamedBytes
-        )
+          streamedBytes)
     }
 
     val response = HttpResponse(
       headers = headers.Server("akka-http-bench") :: Nil,
-      entity = entity
-    )
+      entity = entity)
 
     httpFlow =
-      Flow[HttpRequest].map(_ => response) join
-        (HttpServerBluePrint(ServerSettings(system), NoLogging, false, Http().dateHeaderRendering) atop
-          TLSPlacebo())
+      Flow[HttpRequest].map(_ => response).join(
+        HttpServerBluePrint(ServerSettings(system), NoLogging, false, Http().dateHeaderRendering).atop(
+          TLSPlacebo()))
   }
 
   @TearDown
diff --git a/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/http2/H2ClientServerBenchmark.scala b/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/http2/H2ClientServerBenchmark.scala
index 7e3c888bd..84e2e19cc 100644
--- a/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/http2/H2ClientServerBenchmark.scala
+++ b/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/http2/H2ClientServerBenchmark.scala
@@ -68,16 +68,19 @@ class H2ClientServerBenchmark extends CommonBenchmark with H2RequestResponseBenc
     implicit val ec = system.dispatcher
     val http1 = Flow[SslTlsInbound].mapAsync(1)(_ => {
       Future.failed[SslTlsOutbound](new IllegalStateException("Failed h2 detection"))
-    }).mapMaterializedValue(_ => new ServerTerminator {
-      override def terminate(deadline: FiniteDuration)(implicit ex: ExecutionContext): Future[Http.HttpTerminated] = ???
-    })
+    }).mapMaterializedValue(_ =>
+      new ServerTerminator {
+        override def terminate(deadline: FiniteDuration)(implicit ex: ExecutionContext): Future[Http.HttpTerminated] =
+          ???
+      })
     val http2 =
       Http2Blueprint.handleWithStreamIdHeader(1)(req => {
         req.discardEntityBytes().future.map(_ => response)
       })(system.dispatcher)
         .joinMat(Http2Blueprint.serverStackTls(settings, log, NoOpTelemetry, Http().dateHeaderRendering))(Keep.right)
     val server: Flow[ByteString, ByteString, Any] = Http2.priorKnowledge(http1, http2)
-    val client: BidiFlow[HttpRequest, ByteString, ByteString, HttpResponse, NotUsed] = Http2Blueprint.clientStack(ClientConnectionSettings(system), log, NoOpTelemetry)
+    val client: BidiFlow[HttpRequest, ByteString, ByteString, HttpResponse, NotUsed] =
+      Http2Blueprint.clientStack(ClientConnectionSettings(system), log, NoOpTelemetry)
     httpFlow = client.join(server)
   }
 
diff --git a/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/http2/H2RequestResponseBenchmark.scala b/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/http2/H2RequestResponseBenchmark.scala
index b0a9e9818..ffb419fe9 100644
--- a/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/http2/H2RequestResponseBenchmark.scala
+++ b/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/http2/H2RequestResponseBenchmark.scala
@@ -7,7 +7,15 @@ package akka.http.impl.engine.http2
 import akka.http.impl.engine.http2.FrameEvent.{ DataFrame, HeadersFrame }
 import akka.http.impl.engine.http2.framing.FrameRenderer
 import akka.http.scaladsl.model.HttpEntity.{ Chunk, LastChunk }
-import akka.http.scaladsl.model.{ AttributeKeys, ContentTypes, HttpEntity, HttpMethods, HttpRequest, HttpResponse, Trailer }
+import akka.http.scaladsl.model.{
+  AttributeKeys,
+  ContentTypes,
+  HttpEntity,
+  HttpMethods,
+  HttpRequest,
+  HttpResponse,
+  Trailer
+}
 import akka.http.scaladsl.model.headers.RawHeader
 import akka.stream.scaladsl.Source
 import akka.util.ByteString
@@ -31,7 +39,7 @@ trait H2RequestResponseBenchmark extends HPackEncodingSupport {
     FrameRenderer.render(HeadersFrame(streamId, endStream = true, endHeaders = true, headerBlock(streamId), None))
   private def requestWithSingleFrameBody(streamId: Int): ByteString =
     FrameRenderer.render(HeadersFrame(streamId, endStream = false, endHeaders = true, headerBlock(streamId), None)) ++
-      FrameRenderer.render(DataFrame(streamId, endStream = true, requestBytes))
+    FrameRenderer.render(DataFrame(streamId, endStream = true, requestBytes))
 
   private var firstRequestHeaderBlock: ByteString = _
   // use header compression for subsequent requests
@@ -61,7 +69,8 @@ trait H2RequestResponseBenchmark extends HPackEncodingSupport {
         request = HttpRequest(method = HttpMethods.POST, uri = "http://www.example.com/")
         requestDataCreator = requestWithoutBody _
       case "singleframe" =>
-        request = HttpRequest(method = HttpMethods.POST, uri = "http://www.example.com/", entity = HttpEntity(requestBytes))
+        request =
+          HttpRequest(method = HttpMethods.POST, uri = "http://www.example.com/", entity = HttpEntity(requestBytes))
         requestDataCreator = requestWithSingleFrameBody _
     }
     initRequestHeaderBlocks()
@@ -80,7 +89,8 @@ trait H2RequestResponseBenchmark extends HPackEncodingSupport {
           .addAttribute(AttributeKeys.trailer, Trailer(trailerHeader :: Nil))
       case "chunked" =>
         baseResponse
-          .withEntity(HttpEntity.Chunked(ContentTypes.`text/plain(UTF-8)`, Source(Chunk(responseBody) :: LastChunk(trailer = trailerHeader :: Nil) :: Nil)))
+          .withEntity(HttpEntity.Chunked(ContentTypes.`text/plain(UTF-8)`,
+            Source(Chunk(responseBody) :: LastChunk(trailer = trailerHeader :: Nil) :: Nil)))
       case "strict" =>
         baseResponse
           .withEntity(HttpEntity.Strict(ContentTypes.`text/plain(UTF-8)`, responseBody))
diff --git a/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/http2/H2ServerProcessingBenchmark.scala b/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/http2/H2ServerProcessingBenchmark.scala
index 48b6aadea..dd200996d 100644
--- a/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/http2/H2ServerProcessingBenchmark.scala
+++ b/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/http2/H2ServerProcessingBenchmark.scala
@@ -67,9 +67,11 @@ class H2ServerProcessingBenchmark extends CommonBenchmark with H2RequestResponse
     implicit val ec = system.dispatcher
     val http1 = Flow[SslTlsInbound].mapAsync(1)(_ => {
       Future.failed[SslTlsOutbound](new IllegalStateException("Failed h2 detection"))
-    }).mapMaterializedValue(_ => new ServerTerminator {
-      override def terminate(deadline: FiniteDuration)(implicit ex: ExecutionContext): Future[Http.HttpTerminated] = ???
-    })
+    }).mapMaterializedValue(_ =>
+      new ServerTerminator {
+        override def terminate(deadline: FiniteDuration)(implicit ex: ExecutionContext): Future[Http.HttpTerminated] =
+          ???
+      })
     val http2 =
       Http2Blueprint.handleWithStreamIdHeader(1)(req => {
         req.discardEntityBytes().future.map(_ => response)
diff --git a/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/ws/MaskingBench.scala b/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/ws/MaskingBench.scala
index fc5e84f9d..851f5801c 100644
--- a/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/ws/MaskingBench.scala
+++ b/akka-http-bench-jmh/src/main/scala/akka/http/impl/engine/ws/MaskingBench.scala
@@ -11,7 +11,7 @@ import akka.http.CommonBenchmark
 
 class MaskingBench extends CommonBenchmark {
   val data = ByteString(new Array[Byte](10000))
-  val mask = 0xfedcba09
+  val mask = 0xFEDCBA09
 
   @Benchmark
   def benchRequestProcessing(): (ByteString, Int) =
diff --git a/akka-http-bench-jmh/src/main/scala/akka/http/impl/model/parser/UriParserBenchmark.scala b/akka-http-bench-jmh/src/main/scala/akka/http/impl/model/parser/UriParserBenchmark.scala
index e240dd25e..2b4d8231a 100644
--- a/akka-http-bench-jmh/src/main/scala/akka/http/impl/model/parser/UriParserBenchmark.scala
+++ b/akka-http-bench-jmh/src/main/scala/akka/http/impl/model/parser/UriParserBenchmark.scala
@@ -18,8 +18,7 @@ class UriParserBenchmark {
 
   @Param(Array(
     "http://any.hostname?param1=111&amp;param2=222",
-    "http://any.hostname?param1=111&amp;param2=222&param3=333&param4=444&param5=555&param6=666&param7=777&param8=888&param9=999"
-  ))
+    "http://any.hostname?param1=111&amp;param2=222&param3=333&param4=444&param5=555&param6=666&param7=777&param8=888&param9=999"))
   var url = ""
 
   @Benchmark
diff --git a/akka-http-bench-jmh/src/main/scala/akka/http/scaladsl/unmarshalling/sse/LineParserBenchmark.scala b/akka-http-bench-jmh/src/main/scala/akka/http/scaladsl/unmarshalling/sse/LineParserBenchmark.scala
index 2981f9080..c2f4a0883 100644
--- a/akka-http-bench-jmh/src/main/scala/akka/http/scaladsl/unmarshalling/sse/LineParserBenchmark.scala
+++ b/akka-http-bench-jmh/src/main/scala/akka/http/scaladsl/unmarshalling/sse/LineParserBenchmark.scala
@@ -41,8 +41,7 @@ class LineParserBenchmark {
     "1024",
     "2048",
     "4096",
-    "8192"
-  ))
+    "8192"))
   var chunkSize = 0
 
   lazy val line = ByteString("x" * lineSize + "\n")
diff --git a/akka-http-caching/src/main/scala/akka/http/caching/LfuCache.scala b/akka-http-caching/src/main/scala/akka/http/caching/LfuCache.scala
index 951ef5b2f..889f8ec07 100755
--- a/akka-http-caching/src/main/scala/akka/http/caching/LfuCache.scala
+++ b/akka-http-caching/src/main/scala/akka/http/caching/LfuCache.scala
@@ -38,7 +38,8 @@ object LfuCache {
     require(settings.maxCapacity >= 0, "maxCapacity must not be negative")
     require(settings.initialCapacity <= settings.maxCapacity, "initialCapacity must be <= maxCapacity")
 
-    if (settings.timeToLive.isFinite || settings.timeToIdle.isFinite) expiringLfuCache(settings.maxCapacity, settings.initialCapacity, settings.timeToLive, settings.timeToIdle)
+    if (settings.timeToLive.isFinite || settings.timeToIdle.isFinite)
+      expiringLfuCache(settings.maxCapacity, settings.initialCapacity, settings.timeToLive, settings.timeToIdle)
     else simpleLfuCache(settings.maxCapacity, settings.initialCapacity)
   }
 
@@ -67,7 +68,7 @@ object LfuCache {
   }
 
   private def expiringLfuCache[K, V](maxCapacity: Long, initialCapacity: Int,
-                                     timeToLive: Duration, timeToIdle: Duration): LfuCache[K, V] = {
+      timeToLive: Duration, timeToIdle: Duration): LfuCache[K, V] = {
     require(
       !timeToLive.isFinite || !timeToIdle.isFinite || timeToLive >= timeToIdle,
       s"timeToLive($timeToLive) must be >= than timeToIdle($timeToIdle)")
@@ -86,7 +87,7 @@ object LfuCache {
       .initialCapacity(initialCapacity)
       .maximumSize(maxCapacity)
 
-    val store = (ttl andThen tti)(builder).buildAsync[K, V]
+    val store = ttl.andThen(tti)(builder).buildAsync[K, V]
     new LfuCache[K, V](store)
   }
 
@@ -103,7 +104,8 @@ private[caching] class LfuCache[K, V](val store: AsyncCache[K, V]) extends Cache
 
   def get(key: K): Option[Future[V]] = Option(store.getIfPresent(key)).map(_.toScala)
 
-  def apply(key: K, genValue: () => Future[V]): Future[V] = store.get(key, toJavaMappingFunction[K, V](genValue)).toScala
+  def apply(key: K, genValue: () => Future[V]): Future[V] =
+    store.get(key, toJavaMappingFunction[K, V](genValue)).toScala
 
   /**
    * Multiple call to put method for the same key may result in a race condition,
@@ -117,13 +119,14 @@ private[caching] class LfuCache[K, V](val store: AsyncCache[K, V]) extends Cache
         store.put(key, toJava(mayBeValue).toCompletableFuture)
         mayBeValue
       case _ => mayBeValue.map { value =>
-        store.put(key, toJava(Future.successful(value)).toCompletableFuture)
-        value
-      }
+          store.put(key, toJava(Future.successful(value)).toCompletableFuture)
+          value
+        }
     }
   }
 
-  def getOrLoad(key: K, loadValue: K => Future[V]): Future[V] = store.get(key, toJavaMappingFunction[K, V](loadValue)).toScala
+  def getOrLoad(key: K, loadValue: K => Future[V]): Future[V] =
+    store.get(key, toJavaMappingFunction[K, V](loadValue)).toScala
 
   def remove(key: K): Unit = store.synchronous().invalidate(key)
 
diff --git a/akka-http-caching/src/main/scala/akka/http/caching/impl/settings/CachingSettingsImpl.scala b/akka-http-caching/src/main/scala/akka/http/caching/impl/settings/CachingSettingsImpl.scala
index 7f516842c..0ddf399a5 100644
--- a/akka-http-caching/src/main/scala/akka/http/caching/impl/settings/CachingSettingsImpl.scala
+++ b/akka-http-caching/src/main/scala/akka/http/caching/impl/settings/CachingSettingsImpl.scala
@@ -12,7 +12,7 @@ import com.typesafe.config.Config
 /** INTERNAL API */
 @InternalApi
 private[http] final case class CachingSettingsImpl(lfuCacheSettings: LfuCacheSettings)
-  extends CachingSettings {
+    extends CachingSettings {
   override def productPrefix = "CachingSettings"
 }
 
@@ -21,7 +21,6 @@ private[http] final case class CachingSettingsImpl(lfuCacheSettings: LfuCacheSet
 private[http] object CachingSettingsImpl extends SettingsCompanionImpl[CachingSettingsImpl]("akka.http.caching") {
   def fromSubConfig(root: Config, c: Config): CachingSettingsImpl = {
     new CachingSettingsImpl(
-      LfuCachingSettingsImpl.fromSubConfig(root, c.getConfig("lfu-cache"))
-    )
+      LfuCachingSettingsImpl.fromSubConfig(root, c.getConfig("lfu-cache")))
   }
 }
diff --git a/akka-http-caching/src/main/scala/akka/http/caching/impl/settings/LfuCachingSettingsImpl.scala b/akka-http-caching/src/main/scala/akka/http/caching/impl/settings/LfuCachingSettingsImpl.scala
index 2f36ec8af..dbfbd6926 100644
--- a/akka-http-caching/src/main/scala/akka/http/caching/impl/settings/LfuCachingSettingsImpl.scala
+++ b/akka-http-caching/src/main/scala/akka/http/caching/impl/settings/LfuCachingSettingsImpl.scala
@@ -15,24 +15,24 @@ import scala.concurrent.duration.Duration
 /** INTERNAL API */
 @InternalApi
 private[http] final case class LfuCachingSettingsImpl(
-  maxCapacity:     Int,
-  initialCapacity: Int,
-  timeToLive:      Duration,
-  timeToIdle:      Duration)
-  extends LfuCacheSettings {
+    maxCapacity: Int,
+    initialCapacity: Int,
+    timeToLive: Duration,
+    timeToIdle: Duration)
+    extends LfuCacheSettings {
   override def productPrefix = "LfuCacheSettings"
 }
 
 /** INTERNAL API */
 @InternalApi
-private[http] object LfuCachingSettingsImpl extends SettingsCompanionImpl[LfuCachingSettingsImpl]("akka.http.caching.lfu-cache") {
+private[http] object LfuCachingSettingsImpl
+    extends SettingsCompanionImpl[LfuCachingSettingsImpl]("akka.http.caching.lfu-cache") {
   def fromSubConfig(root: Config, inner: Config): LfuCachingSettingsImpl = {
     val c = inner.withFallback(root.getConfig(prefix))
     new LfuCachingSettingsImpl(
       c.getInt("max-capacity"),
       c.getInt("initial-capacity"),
       c.getPotentiallyInfiniteDuration("time-to-live"),
-      c.getPotentiallyInfiniteDuration("time-to-idle")
-    )
+      c.getPotentiallyInfiniteDuration("time-to-idle"))
   }
 }
diff --git a/akka-http-caching/src/main/scala/akka/http/caching/scaladsl/Cache.scala b/akka-http-caching/src/main/scala/akka/http/caching/scaladsl/Cache.scala
index 11fa16216..a671a019f 100755
--- a/akka-http-caching/src/main/scala/akka/http/caching/scaladsl/Cache.scala
+++ b/akka-http-caching/src/main/scala/akka/http/caching/scaladsl/Cache.scala
@@ -89,11 +89,12 @@ abstract class Cache[K, V] extends akka.http.caching.javadsl.Cache[K, V] {
     futureToJava(apply(key, () => futureToScala(genValue.create())))
 
   final override def getOrFulfil(key: K, f: Procedure[CompletableFuture[V]]): CompletionStage[V] =
-    futureToJava(apply(key, promise => {
-      val completableFuture = new CompletableFuture[V]
-      f(completableFuture)
-      promise.completeWith(futureToScala(completableFuture))
-    }))
+    futureToJava(apply(key,
+      promise => {
+        val completableFuture = new CompletableFuture[V]
+        f(completableFuture)
+        promise.completeWith(futureToScala(completableFuture))
+      }))
 
   /**
    * Returns either the cached CompletionStage for the given key or the given value as a CompletionStage
diff --git a/akka-http-caching/src/main/scala/akka/http/caching/scaladsl/LfuCacheSettings.scala b/akka-http-caching/src/main/scala/akka/http/caching/scaladsl/LfuCacheSettings.scala
index 760b1e830..129407fed 100644
--- a/akka-http-caching/src/main/scala/akka/http/caching/scaladsl/LfuCacheSettings.scala
+++ b/akka-http-caching/src/main/scala/akka/http/caching/scaladsl/LfuCacheSettings.scala
@@ -28,7 +28,8 @@ abstract class LfuCacheSettings private[http] () extends javadsl.LfuCacheSetting
   final def getTimeToIdle: Duration = timeToIdle
 
   override def withMaxCapacity(newMaxCapacity: Int): LfuCacheSettings = self.copy(maxCapacity = newMaxCapacity)
-  override def withInitialCapacity(newInitialCapacity: Int): LfuCacheSettings = self.copy(initialCapacity = newInitialCapacity)
+  override def withInitialCapacity(newInitialCapacity: Int): LfuCacheSettings =
+    self.copy(initialCapacity = newInitialCapacity)
   override def withTimeToLive(newTimeToLive: Duration): LfuCacheSettings = self.copy(timeToLive = newTimeToLive)
   override def withTimeToIdle(newTimeToIdle: Duration): LfuCacheSettings = self.copy(timeToIdle = newTimeToIdle)
 }
diff --git a/akka-http-caching/src/main/scala/akka/http/javadsl/server/directives/CachingDirectives.scala b/akka-http-caching/src/main/scala/akka/http/javadsl/server/directives/CachingDirectives.scala
index c0ab66994..0f0e2f8b0 100644
--- a/akka-http-caching/src/main/scala/akka/http/javadsl/server/directives/CachingDirectives.scala
+++ b/akka-http-caching/src/main/scala/akka/http/javadsl/server/directives/CachingDirectives.scala
@@ -26,14 +26,15 @@ object CachingDirectives {
    *
    * Use [[akka.japi.JavaPartialFunction]] to build the `keyer`.
    */
-  def cache[K](cache: Cache[K, RouteResult], keyer: PartialFunction[RequestContext, K], inner: Supplier[Route]) = RouteAdapter {
-    D.cache(
-      JavaMapping.toScala(cache),
-      toScalaKeyer(keyer)
-    ) { inner.get.delegate }
-  }
+  def cache[K](cache: Cache[K, RouteResult], keyer: PartialFunction[RequestContext, K], inner: Supplier[Route]) =
+    RouteAdapter {
+      D.cache(
+        JavaMapping.toScala(cache),
+        toScalaKeyer(keyer)) { inner.get.delegate }
+    }
 
-  private def toScalaKeyer[K](keyer: PartialFunction[RequestContext, K]): PartialFunction[akka.http.scaladsl.server.RequestContext, K] = {
+  private def toScalaKeyer[K](
+      keyer: PartialFunction[RequestContext, K]): PartialFunction[akka.http.scaladsl.server.RequestContext, K] = {
     case scalaRequestContext: akka.http.scaladsl.server.RequestContext => {
       val javaRequestContext = akka.http.javadsl.server.RoutingJavaMapping.RequestContext.toJava(scalaRequestContext)
       keyer(javaRequestContext)
@@ -52,12 +53,12 @@ object CachingDirectives {
    * Wraps its inner Route with caching support using the given [[Cache]] implementation and
    * keyer function. Note that routes producing streaming responses cannot be wrapped with this directive.
    */
-  def alwaysCache[K](cache: Cache[K, RouteResult], keyer: PartialFunction[RequestContext, K], inner: Supplier[Route]) = RouteAdapter {
-    D.alwaysCache(
-      JavaMapping.toScala(cache),
-      toScalaKeyer(keyer)
-    ) { inner.get.delegate }
-  }
+  def alwaysCache[K](cache: Cache[K, RouteResult], keyer: PartialFunction[RequestContext, K], inner: Supplier[Route]) =
+    RouteAdapter {
+      D.alwaysCache(
+        JavaMapping.toScala(cache),
+        toScalaKeyer(keyer)) { inner.get.delegate }
+    }
 
   /**
    * Creates an [[LfuCache]]
diff --git a/akka-http-caching/src/main/scala/akka/http/scaladsl/server/directives/CachingDirectives.scala b/akka-http-caching/src/main/scala/akka/http/scaladsl/server/directives/CachingDirectives.scala
index f18f8fcbf..f5b693d0c 100644
--- a/akka-http-caching/src/main/scala/akka/http/scaladsl/server/directives/CachingDirectives.scala
+++ b/akka-http-caching/src/main/scala/akka/http/scaladsl/server/directives/CachingDirectives.scala
@@ -34,10 +34,10 @@ trait CachingDirectives {
   def cachingProhibited: Directive0 =
     extract(_.request.headers.exists {
       case x: `Cache-Control` => x.directives.exists {
-        case `no-cache`   => true
-        case `max-age`(0) => true
-        case _            => false
-      }
+          case `no-cache`   => true
+          case `max-age`(0) => true
+          case _            => false
+        }
       case _ => false
     }).flatMap(if (_) pass else reject)
 
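[Editor's note, not part of the diff: the re-indented `cachingProhibited` match above nests a partial function inside a header match; only indentation changed, not semantics. The same nested-exists shape on plain data, as a standalone sketch (`Directive`, `NoCache`, `MaxAge` are hypothetical stand-ins for the header model):]

```scala
// Sketch of the nested pattern match used by cachingProhibited, reduced to
// plain data: true when a "no-cache" or "max-age=0" directive is present.
object CacheControlSketch {
  sealed trait Directive
  case object NoCache extends Directive
  final case class MaxAge(seconds: Int) extends Directive

  def prohibited(directives: List[Directive]): Boolean =
    directives.exists {
      case NoCache   => true
      case MaxAge(0) => true
      case _         => false
    }

  def main(args: Array[String]): Unit = {
    assert(prohibited(List(MaxAge(0))))       // max-age=0 prohibits caching
    assert(!prohibited(List(MaxAge(60))))     // a positive max-age does not
  }
}
```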
diff --git a/akka-http-caching/src/test/scala/akka/http/caching/ExpiringLfuCacheSpec.scala b/akka-http-caching/src/test/scala/akka/http/caching/ExpiringLfuCacheSpec.scala
index 3b4e1f8bb..d713381bd 100755
--- a/akka-http-caching/src/test/scala/akka/http/caching/ExpiringLfuCacheSpec.scala
+++ b/akka-http-caching/src/test/scala/akka/http/caching/ExpiringLfuCacheSpec.scala
@@ -48,12 +48,12 @@ class ExpiringLfuCacheSpec extends AnyWordSpec with Matchers with BeforeAndAfter
     "return Futures on uncached values during evaluation and replace these with the value afterwards" in {
       val cache = lfuCache[String]()
       val latch = new CountDownLatch(1)
-      val future1 = cache(1, (promise: Promise[String]) =>
-        Future {
-          latch.await()
-          promise.success("A")
-        }
-      )
+      val future1 = cache(1,
+        (promise: Promise[String]) =>
+          Future {
+            latch.await()
+            promise.success("A")
+          })
       val future2 = cache.get(1, () => "")
 
       latch.countDown()
@@ -133,22 +133,24 @@ class ExpiringLfuCacheSpec extends AnyWordSpec with Matchers with BeforeAndAfter
       val cache = lfuCache[Int](maxCapacity = 1000)
       // exercise the cache from 10 parallel "tracks" (threads)
       val views = Await.result(Future.traverse(Seq.tabulate(10)(identity)) { track =>
-        Future {
-          val array = Array.fill(1000)(0) // our view of the cache
-          val rand = new Random(track)
-          (1 to 10000) foreach { i =>
-            val ix = rand.nextInt(1000) // for a random index into the cache
-            val value = cache.get(ix, () => { // get (and maybe set) the cache value
-              Thread.sleep(0)
-              rand.nextInt(1000000) + 1
-            }).value.get.get // should always be Future.successful
-            if (array(ix) == 0) array(ix) = value // update our view of the cache
-            else assert(array(ix) == value, "Cache view is inconsistent (track " + track + ", iteration " + i +
-              ", index " + ix + ": expected " + array(ix) + " but is " + value)
+          Future {
+            val array = Array.fill(1000)(0) // our view of the cache
+            val rand = new Random(track)
+            (1 to 10000).foreach { i =>
+              val ix = rand.nextInt(1000) // for a random index into the cache
+              val value = cache.get(ix,
+                () => { // get (and maybe set) the cache value
+                  Thread.sleep(0)
+                  rand.nextInt(1000000) + 1
+                }).value.get.get // should always be Future.successful
+              if (array(ix) == 0) array(ix) = value // update our view of the cache
+              else assert(array(ix) == value,
+                "Cache view is inconsistent (track " + track + ", iteration " + i +
+                ", index " + ix + ": expected " + array(ix) + " but is " + value)
+            }
+            array
           }
-          array
-        }
-      }, 10.second)
+        }, 10.second)
 
       views.transpose.foreach { ints: Seq[Int] =>
         ints.filter(_ != 0).reduceLeft((a, b) => if (a == b) a else 0) should not be 0
@@ -164,7 +166,7 @@ class ExpiringLfuCacheSpec extends AnyWordSpec with Matchers with BeforeAndAfter
   }
 
   def lfuCache[T](maxCapacity: Int = 500, initialCapacity: Int = 16,
-                  timeToLive: Duration = Duration.Inf, timeToIdle: Duration = Duration.Inf): LfuCache[Int, T] = {
+      timeToLive: Duration = Duration.Inf, timeToIdle: Duration = Duration.Inf): LfuCache[Int, T] = {
     LfuCache[Int, T] {
       val settings = CachingSettings(system)
       settings.withLfuCacheSettings(
@@ -172,8 +174,7 @@ class ExpiringLfuCacheSpec extends AnyWordSpec with Matchers with BeforeAndAfter
           .withMaxCapacity(maxCapacity)
           .withInitialCapacity(initialCapacity)
           .withTimeToLive(timeToLive)
-          .withTimeToIdle(timeToIdle)
-      )
+          .withTimeToIdle(timeToIdle))
     }.asInstanceOf[LfuCache[Int, T]]
   }
 
diff --git a/akka-http-caching/src/test/scala/akka/http/scaladsl/server/directives/CachingDirectivesSpec.scala b/akka-http-caching/src/test/scala/akka/http/scaladsl/server/directives/CachingDirectivesSpec.scala
index 6fe074597..57e297a28 100644
--- a/akka-http-caching/src/test/scala/akka/http/scaladsl/server/directives/CachingDirectivesSpec.scala
+++ b/akka-http-caching/src/test/scala/akka/http/scaladsl/server/directives/CachingDirectivesSpec.scala
@@ -62,7 +62,9 @@ class CachingDirectivesSpec extends AnyWordSpec with Matchers with ScalatestRout
       Get() ~> addHeader(`Cache-Control`(`no-cache`)) ~> countingService ~> check { responseAs[String] shouldEqual "3" }
     }
     "not cache responses for GETs if the request contains a `Cache-Control: max-age=0` header" in {
-      Get() ~> addHeader(`Cache-Control`(`max-age`(0))) ~> countingService ~> check { responseAs[String] shouldEqual "4" }
+      Get() ~> addHeader(`Cache-Control`(`max-age`(0))) ~> countingService ~> check {
+        responseAs[String] shouldEqual "4"
+      }
     }
 
     "be transparent to exceptions thrown from its inner route" in {
diff --git a/akka-http-compatibility-tests/src/test/scala/akka/http/scaladsl/HostConnectionPoolCompatSpec.scala b/akka-http-compatibility-tests/src/test/scala/akka/http/scaladsl/HostConnectionPoolCompatSpec.scala
index 5719d5fad..42b3bf980 100644
--- a/akka-http-compatibility-tests/src/test/scala/akka/http/scaladsl/HostConnectionPoolCompatSpec.scala
+++ b/akka-http-compatibility-tests/src/test/scala/akka/http/scaladsl/HostConnectionPoolCompatSpec.scala
@@ -30,7 +30,7 @@ class HostConnectionPoolCompatSpec extends AkkaSpecWithMaterializer {
           .run()
 
       hcp0 shouldEqual hcp1
-      hcp0 should not equal (hcpOther)
+      (hcp0 should not).equal(hcpOther)
 
       HostConnectionPoolCompat.access(hcp0)
     }
diff --git a/akka-http-core/src/main/scala-2.13+/akka/http/ccompat/package.scala b/akka-http-core/src/main/scala-2.13+/akka/http/ccompat/package.scala
index 1dfb0d718..4787a386b 100644
--- a/akka-http-core/src/main/scala-2.13+/akka/http/ccompat/package.scala
+++ b/akka-http-core/src/main/scala-2.13+/akka/http/ccompat/package.scala
@@ -17,7 +17,9 @@ package object ccompat {
  */
 package ccompat {
   import akka.http.scaladsl.model.Uri.Query
-  trait QuerySeqOptimized extends scala.collection.immutable.LinearSeq[(String, String)] with scala.collection.StrictOptimizedLinearSeqOps[(String, String), scala.collection.immutable.LinearSeq, Query] { self: Query =>
+  trait QuerySeqOptimized extends scala.collection.immutable.LinearSeq[(String, String)]
+      with scala.collection.StrictOptimizedLinearSeqOps[(String, String), scala.collection.immutable.LinearSeq, Query] {
+    self: Query =>
     override protected def fromSpecific(coll: IterableOnce[(String, String)]): Query =
       Query(coll.iterator.to(Seq): _*)
 
diff --git a/akka-http-core/src/main/scala-2.13+/akka/http/scaladsl/util/FastFuture.scala b/akka-http-core/src/main/scala-2.13+/akka/http/scaladsl/util/FastFuture.scala
index 044a855b0..55b8f3915 100644
--- a/akka-http-core/src/main/scala-2.13+/akka/http/scaladsl/util/FastFuture.scala
+++ b/akka-http-core/src/main/scala-2.13+/akka/http/scaladsl/util/FastFuture.scala
@@ -44,21 +44,21 @@ class FastFuture[A](val future: Future[A]) extends AnyVal {
       case FulfilledFuture(a) => strictTransform(a, s)
       case ErrorFuture(e)     => strictTransform(e, f)
       case _ => future.value match {
-        case None =>
-          val p = Promise[B]()
-          future.onComplete {
-            case Success(a) => p completeWith strictTransform(a, s)
-            case Failure(e) => p completeWith strictTransform(e, f)
-          }
-          p.future
-        case Some(Success(a)) => strictTransform(a, s)
-        case Some(Failure(e)) => strictTransform(e, f)
-      }
+          case None =>
+            val p = Promise[B]()
+            future.onComplete {
+              case Success(a) => p.completeWith(strictTransform(a, s))
+              case Failure(e) => p.completeWith(strictTransform(e, f))
+            }
+            p.future
+          case Some(Success(a)) => strictTransform(a, s)
+          case Some(Failure(e)) => strictTransform(e, f)
+        }
     }
   }
 
   def recover[B >: A](pf: PartialFunction[Throwable, B])(implicit ec: ExecutionContext): Future[B] =
-    transformWith(FastFuture.successful, t => if (pf isDefinedAt t) FastFuture.successful(pf(t)) else future)
+    transformWith(FastFuture.successful, t => if (pf.isDefinedAt(t)) FastFuture.successful(pf(t)) else future)
 
   def recoverWith[B >: A](pf: PartialFunction[Throwable, Future[B]])(implicit ec: ExecutionContext): Future[B] =
     transformWith(FastFuture.successful, t => pf.applyOrElse(t, (_: Throwable) => future))
@@ -79,9 +79,11 @@ object FastFuture {
     def isCompleted = true
     def result(atMost: Duration)(implicit permit: CanAwait) = a
     def ready(atMost: Duration)(implicit permit: CanAwait) = this
-    def transform[S](f: scala.util.Try[A] => scala.util.Try[S])(implicit executor: scala.concurrent.ExecutionContext): scala.concurrent.Future[S] =
+    def transform[S](f: scala.util.Try[A] => scala.util.Try[S])(
+        implicit executor: scala.concurrent.ExecutionContext): scala.concurrent.Future[S] =
       FastFuture(f(Success(a)))
-    def transformWith[S](f: scala.util.Try[A] => scala.concurrent.Future[S])(implicit executor: scala.concurrent.ExecutionContext): scala.concurrent.Future[S] =
+    def transformWith[S](f: scala.util.Try[A] => scala.concurrent.Future[S])(
+        implicit executor: scala.concurrent.ExecutionContext): scala.concurrent.Future[S] =
       new FastFuture(this).transformWith(f)
   }
   private case class ErrorFuture(error: Throwable) extends Future[Nothing] {
@@ -90,9 +92,11 @@ object FastFuture {
     def isCompleted = true
     def result(atMost: Duration)(implicit permit: CanAwait) = throw error
     def ready(atMost: Duration)(implicit permit: CanAwait) = this
-    def transform[S](f: scala.util.Try[Nothing] => scala.util.Try[S])(implicit executor: scala.concurrent.ExecutionContext): scala.concurrent.Future[S] =
+    def transform[S](f: scala.util.Try[Nothing] => scala.util.Try[S])(
+        implicit executor: scala.concurrent.ExecutionContext): scala.concurrent.Future[S] =
       FastFuture(f(Failure(error)))
-    def transformWith[S](f: scala.util.Try[Nothing] => scala.concurrent.Future[S])(implicit executor: scala.concurrent.ExecutionContext): scala.concurrent.Future[S] =
+    def transformWith[S](f: scala.util.Try[Nothing] => scala.concurrent.Future[S])(
+        implicit executor: scala.concurrent.ExecutionContext): scala.concurrent.Future[S] =
       new FastFuture(this).transformWith(f)
   }
 
@@ -100,7 +104,8 @@ object FastFuture {
     def fast: FastFuture[T] = new FastFuture[T](future)
   }
 
-  def sequence[T, M[_] <: IterableOnce[_]](in: M[Future[T]])(implicit cbf: BuildFrom[M[Future[T]], T, M[T]], executor: ExecutionContext): Future[M[T]] =
+  def sequence[T, M[_] <: IterableOnce[_]](in: M[Future[T]])(implicit cbf: BuildFrom[M[Future[T]], T, M[T]],
+      executor: ExecutionContext): Future[M[T]] =
     in.iterator.foldLeft(successful(cbf.newBuilder(in))) {
       (fr, fa) => for (r <- fr.fast; a <- fa.asInstanceOf[Future[T]].fast) yield r += a
     }.fast.map(_.result())
@@ -113,9 +118,10 @@ object FastFuture {
   def reduce[T, R >: T](futures: IterableOnce[Future[T]])(op: (R, T) => R)(implicit executor: ExecutionContext): Future[R] =
     if (futures.isEmpty) failed(new NoSuchElementException("reduce attempted on empty collection"))
     else sequence(futures).fast.map(_ reduceLeft op)
-*/
+   */
 
-  def traverse[A, B, M[_] <: IterableOnce[_]](in: M[A])(fn: A => Future[B])(implicit cbf: BuildFrom[M[A], B, M[B]], executor: ExecutionContext): Future[M[B]] =
+  def traverse[A, B, M[_] <: IterableOnce[_]](in: M[A])(fn: A => Future[B])(implicit cbf: BuildFrom[M[A], B, M[B]],
+      executor: ExecutionContext): Future[M[B]] =
     in.iterator.foldLeft(successful(cbf.newBuilder(in))) { (fr, a) =>
       val fb = fn(a.asInstanceOf[A])
       for (r <- fr.fast; b <- fb.fast) yield r += b
diff --git a/akka-http-core/src/main/scala-2.13-/akka/http/ccompat/package.scala b/akka-http-core/src/main/scala-2.13-/akka/http/ccompat/package.scala
index d8298a2bf..581393a45 100644
--- a/akka-http-core/src/main/scala-2.13-/akka/http/ccompat/package.scala
+++ b/akka-http-core/src/main/scala-2.13-/akka/http/ccompat/package.scala
@@ -5,7 +5,7 @@
 package akka.http
 
 import scala.collection.generic.{ CanBuildFrom, GenericCompanion }
-import scala.collection.{ GenTraversable, mutable }
+import scala.collection.{ mutable, GenTraversable }
 import scala.{ collection => c }
 
 /**
@@ -19,7 +19,7 @@ package object ccompat {
   import CompatImpl._
 
   implicit def genericCompanionToCBF[A, CC[X] <: GenTraversable[X]](
-    fact: GenericCompanion[CC]): CanBuildFrom[Any, A, CC[A]] =
+      fact: GenericCompanion[CC]): CanBuildFrom[Any, A, CC[A]] =
     simpleCBF(fact.newBuilder[A])
 
   // This really belongs into scala.collection but there's already a package object
@@ -43,8 +43,10 @@ package ccompat {
     def addOne(elem: Elem): this.type = self.+=(elem)
   }
 
-  trait QuerySeqOptimized extends scala.collection.immutable.LinearSeq[(String, String)] with scala.collection.LinearSeqOptimized[(String, String), akka.http.scaladsl.model.Uri.Query] {
+  trait QuerySeqOptimized extends scala.collection.immutable.LinearSeq[(String, String)]
+      with scala.collection.LinearSeqOptimized[(String, String), akka.http.scaladsl.model.Uri.Query] {
     self: akka.http.scaladsl.model.Uri.Query =>
-    override def newBuilder: mutable.Builder[(String, String), akka.http.scaladsl.model.Uri.Query] = akka.http.scaladsl.model.Uri.Query.newBuilder
+    override def newBuilder: mutable.Builder[(String, String), akka.http.scaladsl.model.Uri.Query] =
+      akka.http.scaladsl.model.Uri.Query.newBuilder
   }
 }
diff --git a/akka-http-core/src/main/scala-2.13-/akka/http/scaladsl/util/FastFuture.scala b/akka-http-core/src/main/scala-2.13-/akka/http/scaladsl/util/FastFuture.scala
index dca7f5147..d3317f11b 100644
--- a/akka-http-core/src/main/scala-2.13-/akka/http/scaladsl/util/FastFuture.scala
+++ b/akka-http-core/src/main/scala-2.13-/akka/http/scaladsl/util/FastFuture.scala
@@ -45,21 +45,21 @@ class FastFuture[A](val future: Future[A]) extends AnyVal {
       case FulfilledFuture(a) => strictTransform(a, s)
       case ErrorFuture(e)     => strictTransform(e, f)
       case _ => future.value match {
-        case None =>
-          val p = Promise[B]()
-          future.onComplete {
-            case Success(a) => p completeWith strictTransform(a, s)
-            case Failure(e) => p completeWith strictTransform(e, f)
-          }
-          p.future
-        case Some(Success(a)) => strictTransform(a, s)
-        case Some(Failure(e)) => strictTransform(e, f)
-      }
+          case None =>
+            val p = Promise[B]()
+            future.onComplete {
+              case Success(a) => p.completeWith(strictTransform(a, s))
+              case Failure(e) => p.completeWith(strictTransform(e, f))
+            }
+            p.future
+          case Some(Success(a)) => strictTransform(a, s)
+          case Some(Failure(e)) => strictTransform(e, f)
+        }
     }
   }
 
   def recover[B >: A](pf: PartialFunction[Throwable, B])(implicit ec: ExecutionContext): Future[B] =
-    transformWith(FastFuture.successful, t => if (pf isDefinedAt t) FastFuture.successful(pf(t)) else future)
+    transformWith(FastFuture.successful, t => if (pf.isDefinedAt(t)) FastFuture.successful(pf(t)) else future)
 
   def recoverWith[B >: A](pf: PartialFunction[Throwable, Future[B]])(implicit ec: ExecutionContext): Future[B] =
     transformWith(FastFuture.successful, t => pf.applyOrElse(t, (_: Throwable) => future))
@@ -80,9 +80,11 @@ object FastFuture {
     def isCompleted = true
     def result(atMost: Duration)(implicit permit: CanAwait) = a
     def ready(atMost: Duration)(implicit permit: CanAwait) = this
-    def transform[S](f: scala.util.Try[A] => scala.util.Try[S])(implicit executor: scala.concurrent.ExecutionContext): scala.concurrent.Future[S] =
+    def transform[S](f: scala.util.Try[A] => scala.util.Try[S])(
+        implicit executor: scala.concurrent.ExecutionContext): scala.concurrent.Future[S] =
       FastFuture(f(Success(a)))
-    def transformWith[S](f: scala.util.Try[A] => scala.concurrent.Future[S])(implicit executor: scala.concurrent.ExecutionContext): scala.concurrent.Future[S] =
+    def transformWith[S](f: scala.util.Try[A] => scala.concurrent.Future[S])(
+        implicit executor: scala.concurrent.ExecutionContext): scala.concurrent.Future[S] =
       new FastFuture(this).transformWith(f)
   }
   private case class ErrorFuture(error: Throwable) extends Future[Nothing] {
@@ -91,9 +93,11 @@ object FastFuture {
     def isCompleted = true
     def result(atMost: Duration)(implicit permit: CanAwait) = throw error
     def ready(atMost: Duration)(implicit permit: CanAwait) = this
-    def transform[S](f: scala.util.Try[Nothing] => scala.util.Try[S])(implicit executor: scala.concurrent.ExecutionContext): scala.concurrent.Future[S] =
+    def transform[S](f: scala.util.Try[Nothing] => scala.util.Try[S])(
+        implicit executor: scala.concurrent.ExecutionContext): scala.concurrent.Future[S] =
       FastFuture(f(Failure(error)))
-    def transformWith[S](f: scala.util.Try[Nothing] => scala.concurrent.Future[S])(implicit executor: scala.concurrent.ExecutionContext): scala.concurrent.Future[S] =
+    def transformWith[S](f: scala.util.Try[Nothing] => scala.concurrent.Future[S])(
+        implicit executor: scala.concurrent.ExecutionContext): scala.concurrent.Future[S] =
       new FastFuture(this).transformWith(f)
   }
 
@@ -101,20 +105,24 @@ object FastFuture {
     def fast: FastFuture[T] = new FastFuture[T](future)
   }
 
-  def sequence[T, M[_] <: TraversableOnce[_]](in: M[Future[T]])(implicit cbf: CanBuildFrom[M[Future[T]], T, M[T]], executor: ExecutionContext): Future[M[T]] =
+  def sequence[T, M[_] <: TraversableOnce[_]](in: M[Future[T]])(implicit cbf: CanBuildFrom[M[Future[T]], T, M[T]],
+      executor: ExecutionContext): Future[M[T]] =
     in.foldLeft(successful(cbf(in))) {
       (fr, fa) => for (r <- fr.fast; a <- fa.asInstanceOf[Future[T]].fast) yield r += a
     }.fast.map(_.result())
 
-  def fold[T, R](futures: TraversableOnce[Future[T]])(zero: R)(f: (R, T) => R)(implicit executor: ExecutionContext): Future[R] =
+  def fold[T, R](futures: TraversableOnce[Future[T]])(zero: R)(f: (R, T) => R)(
+      implicit executor: ExecutionContext): Future[R] =
     if (futures.isEmpty) successful(zero)
     else sequence(futures).fast.map(_.foldLeft(zero)(f))
 
-  def reduce[T, R >: T](futures: TraversableOnce[Future[T]])(op: (R, T) => R)(implicit executor: ExecutionContext): Future[R] =
+  def reduce[T, R >: T](futures: TraversableOnce[Future[T]])(op: (R, T) => R)(
+      implicit executor: ExecutionContext): Future[R] =
     if (futures.isEmpty) failed(new NoSuchElementException("reduce attempted on empty collection"))
-    else sequence(futures).fast.map(_ reduceLeft op)
+    else sequence(futures).fast.map(_.reduceLeft(op))
 
-  def traverse[A, B, M[_] <: TraversableOnce[_]](in: M[A])(fn: A => Future[B])(implicit cbf: CanBuildFrom[M[A], B, M[B]], executor: ExecutionContext): Future[M[B]] =
+  def traverse[A, B, M[_] <: TraversableOnce[_]](in: M[A])(fn: A => Future[B])(
+      implicit cbf: CanBuildFrom[M[A], B, M[B]], executor: ExecutionContext): Future[M[B]] =
     in.foldLeft(successful(cbf(in))) { (fr, a) =>
       val fb = fn(a.asInstanceOf[A])
       for (r <- fr.fast; b <- fb.fast) yield r += b
diff --git a/akka-http-core/src/main/scala/akka/http/ParsingErrorHandler.scala b/akka-http-core/src/main/scala/akka/http/ParsingErrorHandler.scala
index 30e4d75ea..bb85e4056 100644
--- a/akka-http-core/src/main/scala/akka/http/ParsingErrorHandler.scala
+++ b/akka-http-core/src/main/scala/akka/http/ParsingErrorHandler.scala
@@ -16,9 +16,10 @@ abstract class ParsingErrorHandler {
 object DefaultParsingErrorHandler extends ParsingErrorHandler {
   import akka.http.impl.engine.parsing.logParsingError
 
-  override def handle(status: StatusCode, info: ErrorInfo, log: LoggingAdapter, settings: ServerSettings): HttpResponse = {
+  override def handle(
+      status: StatusCode, info: ErrorInfo, log: LoggingAdapter, settings: ServerSettings): HttpResponse = {
     logParsingError(
-      info withSummaryPrepended s"Illegal request, responding with status '$status'",
+      info.withSummaryPrepended(s"Illegal request, responding with status '$status'"),
       log, settings.parserSettings.errorLoggingVerbosity)
     val msg = if (settings.verboseErrorMessages) info.formatPretty else info.summary
     HttpResponse(status, entity = msg)
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/HttpConnectionIdleTimeoutBidi.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/HttpConnectionIdleTimeoutBidi.scala
index e2b3a7ed0..4b56de611 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/HttpConnectionIdleTimeoutBidi.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/HttpConnectionIdleTimeoutBidi.scala
@@ -18,34 +18,36 @@ import scala.util.control.NoStackTrace
 /** INTERNAL API */
 @InternalApi
 private[akka] object HttpConnectionIdleTimeoutBidi {
-  def apply(idleTimeout: Duration, remoteAddress: Option[InetSocketAddress]): BidiFlow[ByteString, ByteString, ByteString, ByteString, NotUsed] =
+  def apply(idleTimeout: Duration, remoteAddress: Option[InetSocketAddress])
+      : BidiFlow[ByteString, ByteString, ByteString, ByteString, NotUsed] =
     idleTimeout match {
       case f: FiniteDuration => apply(f, remoteAddress)
       case _                 => BidiFlow.identity
     }
-  def apply(idleTimeout: FiniteDuration, remoteAddress: Option[InetSocketAddress]): BidiFlow[ByteString, ByteString, ByteString, ByteString, NotUsed] = {
+  def apply(idleTimeout: FiniteDuration, remoteAddress: Option[InetSocketAddress])
+      : BidiFlow[ByteString, ByteString, ByteString, ByteString, NotUsed] = {
     val connectionToString = remoteAddress match {
       case Some(addr) => s" on connection to [$addr]"
       case _          => ""
     }
     val ex = new HttpIdleTimeoutException(
       "HTTP idle-timeout encountered" + connectionToString + ", " +
-        "no bytes passed in the last " + idleTimeout + ". " +
-        "This is configurable by akka.http.[server|client].idle-timeout.", idleTimeout)
+      "no bytes passed in the last " + idleTimeout + ". " +
+      "This is configurable by akka.http.[server|client].idle-timeout.", idleTimeout)
 
-    val mapError = Flow[ByteString].mapError({ case t: TimeoutException => ex })
+    val mapError = Flow[ByteString].mapError { case t: TimeoutException => ex }
 
     val toNetTimeout: BidiFlow[ByteString, ByteString, ByteString, ByteString, NotUsed] =
       BidiFlow.fromFlows(
         mapError,
-        Flow[ByteString]
-      )
+        Flow[ByteString])
     val fromNetTimeout: BidiFlow[ByteString, ByteString, ByteString, ByteString, NotUsed] =
       toNetTimeout.reversed
 
-    fromNetTimeout atop BidiFlow.bidirectionalIdleTimeout[ByteString, ByteString](idleTimeout) atop toNetTimeout
+    fromNetTimeout.atop(BidiFlow.bidirectionalIdleTimeout[ByteString, ByteString](idleTimeout)).atop(toNetTimeout)
   }
 
 }
 
-class HttpIdleTimeoutException(msg: String, timeout: FiniteDuration) extends TimeoutException(msg: String) with NoStackTrace
+class HttpIdleTimeoutException(msg: String, timeout: FiniteDuration) extends TimeoutException(msg: String)
+    with NoStackTrace
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/client/HttpsProxyGraphStage.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/client/HttpsProxyGraphStage.scala
index 8ca37abc7..99a74ee42 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/client/HttpsProxyGraphStage.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/client/HttpsProxyGraphStage.scala
@@ -11,7 +11,7 @@ import akka.http.impl.engine.parsing.ParserOutput.{ NeedMoreData, RemainingBytes
 import akka.http.impl.engine.parsing.{ HttpHeaderParser, HttpResponseParser, ParserOutput }
 import akka.http.impl.util.ByteStringRendering
 import akka.http.impl.util.Rendering.CrLf
-import akka.http.scaladsl.model.headers.{ HttpCredentials, `Proxy-Authorization` }
+import akka.http.scaladsl.model.headers.{ `Proxy-Authorization`, HttpCredentials }
 import akka.http.scaladsl.model.{ HttpMethods, StatusCodes }
 import akka.http.scaladsl.settings.ClientConnectionSettings
 import akka.stream.scaladsl.BidiFlow
@@ -32,17 +32,18 @@ private[http] object HttpsProxyGraphStage {
   // State after the proxy has responded
   case object Connected extends State
 
-  def apply(targetHostName: String, targetPort: Int, settings: ClientConnectionSettings, proxyAuth: Option[HttpCredentials]): BidiFlow[ByteString, ByteString, ByteString, ByteString, NotUsed] =
+  def apply(targetHostName: String, targetPort: Int, settings: ClientConnectionSettings,
+      proxyAuth: Option[HttpCredentials]): BidiFlow[ByteString, ByteString, ByteString, ByteString, NotUsed] =
     BidiFlow.fromGraph(new HttpsProxyGraphStage(targetHostName, targetPort, settings, proxyAuth))
 }
 
 /** INTERNAL API */
 @InternalApi
 private final class HttpsProxyGraphStage(
-  targetHostName: String, targetPort: Int,
-  settings:           ClientConnectionSettings,
-  proxyAuthorization: Option[HttpCredentials])
-  extends GraphStage[BidiShape[ByteString, ByteString, ByteString, ByteString]] {
+    targetHostName: String, targetPort: Int,
+    settings: ClientConnectionSettings,
+    proxyAuthorization: Option[HttpCredentials])
+    extends GraphStage[BidiShape[ByteString, ByteString, ByteString, ByteString]] {
 
   import HttpsProxyGraphStage._
 
@@ -52,7 +53,8 @@ private final class HttpsProxyGraphStage(
   val sslIn: Inlet[ByteString] = Inlet("OutgoingSSL.in")
   val sslOut: Outlet[ByteString] = Outlet("OutgoingSSL.out")
 
-  override def shape: BidiShape[ByteString, ByteString, ByteString, ByteString] = BidiShape.apply(sslIn, bytesOut, bytesIn, sslOut)
+  override def shape: BidiShape[ByteString, ByteString, ByteString, ByteString] =
+    BidiShape.apply(sslIn, bytesOut, bytesIn, sslOut)
 
   private val connectMsg = {
     val r = new ByteStringRendering(256)
@@ -66,122 +68,132 @@ private final class HttpsProxyGraphStage(
     r.get
   }
 
-  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic = new GraphStageLogic(shape) with StageLogging {
-    private var state: State = Starting
+  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
+    new GraphStageLogic(shape) with StageLogging {
+      private var state: State = Starting
 
-    lazy val parser = {
-      val p = new HttpResponseParser(settings.parserSettings, HttpHeaderParser(settings.parserSettings, log)) {
-        override def handleInformationalResponses = false
+      lazy val parser = {
+        val p = new HttpResponseParser(settings.parserSettings, HttpHeaderParser(settings.parserSettings, log)) {
+          override def handleInformationalResponses = false
 
-        override protected def parseMessage(input: ByteString, offset: Int): StateResult = {
-          // hacky, we want in the first branch *all fragments* of the first response
-          if (offset == 0) {
-            super.parseMessage(input, offset)
-          } else {
-            if (input.size > offset) {
-              emit(RemainingBytes(input.drop(offset)))
+          override protected def parseMessage(input: ByteString, offset: Int): StateResult = {
+            // hacky, we want in the first branch *all fragments* of the first response
+            if (offset == 0) {
+              super.parseMessage(input, offset)
             } else {
-              emit(NeedMoreData)
+              if (input.size > offset) {
+                emit(RemainingBytes(input.drop(offset)))
+              } else {
+                emit(NeedMoreData)
+              }
+              terminate()
             }
-            terminate()
           }
         }
+        p.setContextForNextResponse(HttpResponseParser.ResponseContext(HttpMethods.CONNECT, None))
+        p
       }
-      p.setContextForNextResponse(HttpResponseParser.ResponseContext(HttpMethods.CONNECT, None))
-      p
-    }
 
-    setHandler(sslIn, new InHandler {
-      override def onPush() = {
-        state match {
-          case Starting =>
-            throw new IllegalStateException("inlet OutgoingSSL.in unexpectedly pushed in Starting state")
-          case Connecting =>
-            throw new IllegalStateException("inlet OutgoingSSL.in unexpectedly pushed in Connecting state")
-          case Connected =>
-            push(bytesOut, grab(sslIn))
-        }
-      }
+      setHandler(sslIn,
+        new InHandler {
+          override def onPush() = {
+            state match {
+              case Starting =>
+                throw new IllegalStateException("inlet OutgoingSSL.in unexpectedly pushed in Starting state")
+              case Connecting =>
+                throw new IllegalStateException("inlet OutgoingSSL.in unexpectedly pushed in Connecting state")
+              case Connected =>
+                push(bytesOut, grab(sslIn))
+            }
+          }
 
-      override def onUpstreamFinish(): Unit = {
-        complete(bytesOut)
-      }
+          override def onUpstreamFinish(): Unit = {
+            complete(bytesOut)
+          }
 
-    })
-
-    setHandler(bytesIn, new InHandler {
-      override def onPush() = {
-        state match {
-          case Starting =>
-          // that means the proxy sent us something even before CONNECT was sent, so we just ignore it
-          case Connecting =>
-            val proxyResponse = grab(bytesIn)
-            parser.parseBytes(proxyResponse) match {
-              case NeedMoreData =>
-                pull(bytesIn)
-              case ResponseStart(_: StatusCodes.Success, _, _, _, _, _) =>
-                var pushed = false
-                val parseResult = parser.onPull()
-                require(parseResult == ParserOutput.MessageEnd, s"parseResult should be MessageEnd but was $parseResult")
-                parser.onPull() match {
-                  // NeedMoreData is what we emit in overridden `parseMessage` in case input.size == offset
+        })
+
+      setHandler(bytesIn,
+        new InHandler {
+          override def onPush() = {
+            state match {
+              case Starting =>
+              // that means the proxy sent us something even before CONNECT was sent, so we just ignore it
+              case Connecting =>
+                val proxyResponse = grab(bytesIn)
+                parser.parseBytes(proxyResponse) match {
                   case NeedMoreData =>
-                  case RemainingBytes(bytes) =>
-                    push(sslOut, bytes) // parser already read more than expected, forward that data directly
-                    pushed = true
+                    pull(bytesIn)
+                  case ResponseStart(_: StatusCodes.Success, _, _, _, _, _) =>
+                    var pushed = false
+                    val parseResult = parser.onPull()
+                    require(parseResult == ParserOutput.MessageEnd,
+                      s"parseResult should be MessageEnd but was $parseResult")
+                    parser.onPull() match {
+                      // NeedMoreData is what we emit in overridden `parseMessage` in case input.size == offset
+                      case NeedMoreData =>
+                      case RemainingBytes(bytes) =>
+                        push(sslOut, bytes) // parser already read more than expected, forward that data directly
+                        pushed = true
+                      case other =>
+                        throw new IllegalStateException(s"unexpected element of type ${other.getClass}")
+                    }
+                    parser.onUpstreamFinish()
+
+                    log.debug(s"HTTP(S) proxy connection to {}:{} established. Now forwarding data.", targetHostName,
+                      targetPort)
+
+                    state = Connected
+                    if (isAvailable(bytesOut)) pull(sslIn)
+                    if (isAvailable(sslOut)) pull(bytesIn)
+                  case ResponseStart(statusCode, _, _, _, _, _) =>
+                    failStage(new ProxyConnectionFailedException(
+                      s"The HTTP(S) proxy rejected to open a connection to $targetHostName:$targetPort with status code: $statusCode"))
                   case other =>
-                    throw new IllegalStateException(s"unexpected element of type ${other.getClass}")
+                    throw new IllegalStateException(s"unexpected element of type $other")
                 }
-                parser.onUpstreamFinish()
-
-                log.debug(s"HTTP(S) proxy connection to {}:{} established. Now forwarding data.", targetHostName, targetPort)
 
-                state = Connected
-                if (isAvailable(bytesOut)) pull(sslIn)
-                if (isAvailable(sslOut)) pull(bytesIn)
-              case ResponseStart(statusCode, _, _, _, _, _) =>
-                failStage(new ProxyConnectionFailedException(s"The HTTP(S) proxy refused to open a connection to $targetHostName:$targetPort with status code: $statusCode"))
-              case other =>
-                throw new IllegalStateException(s"unexpected element of type $other")
+              case Connected =>
+                push(sslOut, grab(bytesIn))
             }
+          }
 
-          case Connected =>
-            push(sslOut, grab(bytesIn))
-        }
-      }
-
-      override def onUpstreamFinish(): Unit = complete(sslOut)
-
-    })
-
-    setHandler(bytesOut, new OutHandler {
-      override def onPull() = {
-        state match {
-          case Starting =>
-            log.debug(s"TCP connection to HTTP(S) proxy connection established. Sending CONNECT {}:{} to HTTP(S) proxy", targetHostName, targetPort)
-            push(bytesOut, connectMsg)
-            state = Connecting
-          case Connecting =>
-          // don't need to do anything
-          case Connected =>
-            pull(sslIn)
-        }
-      }
+          override def onUpstreamFinish(): Unit = complete(sslOut)
+
+        })
+
+      setHandler(bytesOut,
+        new OutHandler {
+          override def onPull() = {
+            state match {
+              case Starting =>
+                log.debug(
+                  s"TCP connection to HTTP(S) proxy connection established. Sending CONNECT {}:{} to HTTP(S) proxy",
+                  targetHostName, targetPort)
+                push(bytesOut, connectMsg)
+                state = Connecting
+              case Connecting =>
+              // don't need to do anything
+              case Connected =>
+                pull(sslIn)
+            }
+          }
 
-      override def onDownstreamFinish(): Unit = cancel(sslIn)
+          override def onDownstreamFinish(): Unit = cancel(sslIn)
 
-    })
+        })
 
-    setHandler(sslOut, new OutHandler {
-      override def onPull() = {
-        pull(bytesIn)
-      }
+      setHandler(sslOut,
+        new OutHandler {
+          override def onPull() = {
+            pull(bytesIn)
+          }
 
-      override def onDownstreamFinish(): Unit = cancel(bytesIn)
+          override def onDownstreamFinish(): Unit = cancel(bytesIn)
 
-    })
+        })
 
-  }
+    }
 
 }
 
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/client/OutgoingConnectionBlueprint.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/client/OutgoingConnectionBlueprint.scala
index 61b81721f..5ff609731 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/client/OutgoingConnectionBlueprint.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/client/OutgoingConnectionBlueprint.scala
@@ -52,20 +52,20 @@ private[http] object OutgoingConnectionBlueprint {
     <------------+----------------|  Parsing   |                                            |
                                   |  Merge     |<------------------------------------------ V
                                   +------------+
-  */
+   */
   def apply(
-    hostHeader: headers.Host,
-    settings:   ClientConnectionSettings,
-    log:        LoggingAdapter): Http.ClientLayer = {
+      hostHeader: headers.Host,
+      settings: ClientConnectionSettings,
+      log: LoggingAdapter): Http.ClientLayer = {
     import settings._
 
     val core = BidiFlow.fromGraph(GraphDSL.create() { implicit b =>
       import GraphDSL.Implicits._
 
       val renderingContextCreation = b.add {
-        Flow[HttpRequest] map { request =>
+        Flow[HttpRequest].map { request =>
           val sendEntityTrigger =
-            request.headers collectFirst { case headers.Expect.`100-continue` => Promise[NotUsed]().future }
+            request.headers.collectFirst { case headers.Expect.`100-continue` => Promise[NotUsed]().future }
           RequestRenderingContext(request, hostHeader, sendEntityTrigger)
         }
       }
@@ -79,7 +79,7 @@ private[http] object OutgoingConnectionBlueprint {
         Flow[RequestRenderingContext].flatMapConcat(requestRendererFactory.renderToSource).named("renderer")
       }
 
-      val bypass = Flow[RequestRenderingContext] map { ctx =>
+      val bypass = Flow[RequestRenderingContext].map { ctx =>
         HttpResponseParser.ResponseContext(ctx.request.method, ctx.sendEntityTrigger.map(_.asInstanceOf[Promise[Unit]]))
       }
 
@@ -96,14 +96,16 @@ private[http] object OutgoingConnectionBlueprint {
 
       val terminationFanout = b.add(Broadcast[HttpResponse](2))
 
-      val logger = b.add(Flow[ByteString].mapError { case t => log.debug(s"Outgoing request stream error {}", t); t }.named("errorLogger"))
+      val logger =
+        b.add(Flow[ByteString].mapError { case t => log.debug(s"Outgoing request stream error {}", t); t }.named(
+          "errorLogger"))
       val wrapTls = b.add(Flow[ByteString].map(SendBytes))
 
       val collectSessionBytes = b.add(Flow[SslTlsInbound].collect { case s: SessionBytes => s })
 
       renderingContextCreation.out ~> bypassFanout.in
-      bypassFanout.out(0) ~> terminationMerge.in0
-      terminationMerge.out ~> requestRendering ~> logger ~> wrapTls
+      bypassFanout.out(0)          ~> terminationMerge.in0
+      terminationMerge.out         ~> requestRendering ~> logger ~> wrapTls
 
       bypassFanout.out(1) ~> bypass ~> responseParsingMerge.in1
       collectSessionBytes ~> responseParsingMerge.in0
@@ -120,15 +122,15 @@ private[http] object OutgoingConnectionBlueprint {
 
     One2OneBidiFlow[HttpRequest, HttpResponse](
       -1,
-      outputTruncationException = new UnexpectedConnectionClosureException(_)
-    ) atop
-      core atop
-      logTLSBidiBySetting("client-plain-text", settings.logUnencryptedNetworkBytes)
+      outputTruncationException = new UnexpectedConnectionClosureException(_)).atop(
+      core).atop(
+      logTLSBidiBySetting("client-plain-text", settings.logUnencryptedNetworkBytes))
   }
 
   // a simple merge stage that simply forwards its first input and ignores its second input
   // (the terminationBackchannelInput), but applies a special completion handling
-  private object TerminationMerge extends GraphStage[FanInShape2[RequestRenderingContext, HttpResponse, RequestRenderingContext]] {
+  private object TerminationMerge
+      extends GraphStage[FanInShape2[RequestRenderingContext, HttpResponse, RequestRenderingContext]] {
     private val requestIn = Inlet[RequestRenderingContext]("TerminationMerge.requestIn")
     private val responseOut = Inlet[HttpResponse]("TerminationMerge.responseOut")
     private val requestContextOut = Outlet[RequestRenderingContext]("TerminationMerge.requestContextOut")
@@ -141,9 +143,10 @@ private[http] object OutgoingConnectionBlueprint {
       passAlong(requestIn, requestContextOut, doFinish = false, doFail = true)
       setHandler(requestContextOut, eagerTerminateOutput)
 
-      setHandler(responseOut, new InHandler {
-        override def onPush(): Unit = pull(responseOut)
-      })
+      setHandler(responseOut,
+        new InHandler {
+          override def onPush(): Unit = pull(responseOut)
+        })
 
       override def preStart(): Unit = {
         pull(requestIn)
@@ -163,115 +166,116 @@ private[http] object OutgoingConnectionBlueprint {
    * of downstream until end of chunks has been reached.
    */
   private[client] final class PrepareResponse(parserSettings: ParserSettings)
-    extends GraphStage[FlowShape[ResponseOutput, HttpResponse]] {
+      extends GraphStage[FlowShape[ResponseOutput, HttpResponse]] {
 
     private val responseOutputIn = Inlet[ResponseOutput]("PrepareResponse.responseOutputIn")
     private val httpResponseOut = Outlet[HttpResponse]("PrepareResponse.httpResponseOut")
 
     val shape = new FlowShape(responseOutputIn, httpResponseOut)
 
-    override def createLogic(effectiveAttributes: Attributes) = new GraphStageLogic(shape) with InHandler with OutHandler {
-      private var entitySource: SubSourceOutlet[ResponseOutput] = _
-      private def entitySubstreamStarted = entitySource ne null
-      private def idle = this
-      private var completionDeferred = false
-      private var completeOnMessageEnd = false
+    override def createLogic(effectiveAttributes: Attributes) =
+      new GraphStageLogic(shape) with InHandler with OutHandler {
+        private var entitySource: SubSourceOutlet[ResponseOutput] = _
+        private def entitySubstreamStarted = entitySource ne null
+        private def idle = this
+        private var completionDeferred = false
+        private var completeOnMessageEnd = false
 
-      def setIdleHandlers(): Unit =
-        if (completeOnMessageEnd || completionDeferred) completeStage()
-        else setHandlers(responseOutputIn, httpResponseOut, idle)
+        def setIdleHandlers(): Unit =
+          if (completeOnMessageEnd || completionDeferred) completeStage()
+          else setHandlers(responseOutputIn, httpResponseOut, idle)
 
-      def onPush(): Unit = grab(responseOutputIn) match {
-        case ResponseStart(statusCode, protocol, attributes, headers, entityCreator, closeRequested) =>
-          val entity = createEntity(entityCreator) withSizeLimit parserSettings.maxContentLength
-          push(httpResponseOut, new HttpResponse(statusCode, headers, attributes, entity, protocol))
-          completeOnMessageEnd = closeRequested
+        def onPush(): Unit = grab(responseOutputIn) match {
+          case ResponseStart(statusCode, protocol, attributes, headers, entityCreator, closeRequested) =>
+            val entity = createEntity(entityCreator).withSizeLimit(parserSettings.maxContentLength)
+            push(httpResponseOut, new HttpResponse(statusCode, headers, attributes, entity, protocol))
+            completeOnMessageEnd = closeRequested
 
-        case MessageStartError(_, info) =>
-          throw IllegalResponseException(info)
+          case MessageStartError(_, info) =>
+            throw IllegalResponseException(info)
 
-        case other =>
-          throw new IllegalStateException(s"ResponseStart expected but $other received.")
-      }
+          case other =>
+            throw new IllegalStateException(s"ResponseStart expected but $other received.")
+        }
 
-      def onPull(): Unit = {
-        if (!entitySubstreamStarted) pull(responseOutputIn)
-      }
+        def onPull(): Unit = {
+          if (!entitySubstreamStarted) pull(responseOutputIn)
+        }
 
-      override def onDownstreamFinish(): Unit = {
-        // if downstream cancels while streaming entity,
-        // make sure we also cancel the entity source, but
-        // after being done with streaming the entity
-        if (entitySubstreamStarted) {
-          completionDeferred = true
-        } else {
-          completeStage()
+        override def onDownstreamFinish(): Unit = {
+          // if downstream cancels while streaming entity,
+          // make sure we also cancel the entity source, but
+          // after being done with streaming the entity
+          if (entitySubstreamStarted) {
+            completionDeferred = true
+          } else {
+            completeStage()
+          }
         }
-      }
 
-      setIdleHandlers()
+        setIdleHandlers()
 
-      // with a strict message there still is a MessageEnd to wait for
-      lazy val waitForMessageEnd = new InHandler with OutHandler {
-        def onPush(): Unit = grab(responseOutputIn) match {
-          case MessageEnd =>
-            if (isAvailable(httpResponseOut)) pull(responseOutputIn)
-            setIdleHandlers()
-          case other => throw new IllegalStateException(s"MessageEnd expected but $other received.")
-        }
+        // with a strict message there still is a MessageEnd to wait for
+        lazy val waitForMessageEnd = new InHandler with OutHandler {
+          def onPush(): Unit = grab(responseOutputIn) match {
+            case MessageEnd =>
+              if (isAvailable(httpResponseOut)) pull(responseOutputIn)
+              setIdleHandlers()
+            case other => throw new IllegalStateException(s"MessageEnd expected but $other received.")
+          }
 
-        override def onPull(): Unit = {
-          // ignore pull as we will anyway pull when we get MessageEnd
+          override def onPull(): Unit = {
+            // ignore pull as we will anyway pull when we get MessageEnd
+          }
         }
-      }
 
-      // with a streamed entity we push the chunks into the substream
-      // until we reach MessageEnd
-      private lazy val substreamHandler = new InHandler with OutHandler {
-        override def onPush(): Unit = grab(responseOutputIn) match {
-          case MessageEnd =>
-            entitySource.complete()
-            entitySource = null
-            // there was a deferred pull from upstream
-            // while we were streaming the entity
-            if (isAvailable(httpResponseOut)) pull(responseOutputIn)
-            setIdleHandlers()
-
-          case messagePart =>
-            entitySource.push(messagePart)
-        }
+        // with a streamed entity we push the chunks into the substream
+        // until we reach MessageEnd
+        private lazy val substreamHandler = new InHandler with OutHandler {
+          override def onPush(): Unit = grab(responseOutputIn) match {
+            case MessageEnd =>
+              entitySource.complete()
+              entitySource = null
+              // there was a deferred pull from upstream
+              // while we were streaming the entity
+              if (isAvailable(httpResponseOut)) pull(responseOutputIn)
+              setIdleHandlers()
+
+            case messagePart =>
+              entitySource.push(messagePart)
+          }
 
-        override def onPull(): Unit = pull(responseOutputIn)
+          override def onPull(): Unit = pull(responseOutputIn)
 
-        override def onUpstreamFinish(): Unit = {
-          entitySource.complete()
-          completeStage()
-        }
+          override def onUpstreamFinish(): Unit = {
+            entitySource.complete()
+            completeStage()
+          }
 
-        override def onUpstreamFailure(reason: Throwable): Unit = {
-          entitySource.fail(reason)
-          failStage(reason)
+          override def onUpstreamFailure(reason: Throwable): Unit = {
+            entitySource.fail(reason)
+            failStage(reason)
+          }
         }
-      }
 
-      private def createEntity(creator: EntityCreator[ResponseOutput, ResponseEntity]): ResponseEntity = {
-        creator match {
-          case StrictEntityCreator(entity) =>
-            // upstream demanded one element, which it just got
-            // but we want MessageEnd as well
-            pull(responseOutputIn)
-            setHandler(responseOutputIn, waitForMessageEnd)
-            setHandler(httpResponseOut, waitForMessageEnd)
-            entity
-
-          case StreamedEntityCreator(creator) =>
-            entitySource = new SubSourceOutlet[ResponseOutput]("EntitySource")
-            entitySource.setHandler(substreamHandler)
-            setHandler(responseOutputIn, substreamHandler)
-            creator(Source.fromGraph(entitySource.source))
+        private def createEntity(creator: EntityCreator[ResponseOutput, ResponseEntity]): ResponseEntity = {
+          creator match {
+            case StrictEntityCreator(entity) =>
+              // upstream demanded one element, which it just got
+              // but we want MessageEnd as well
+              pull(responseOutputIn)
+              setHandler(responseOutputIn, waitForMessageEnd)
+              setHandler(httpResponseOut, waitForMessageEnd)
+              entity
+
+            case StreamedEntityCreator(creator) =>
+              entitySource = new SubSourceOutlet[ResponseOutput]("EntitySource")
+              entitySource.setHandler(substreamHandler)
+              setHandler(responseOutputIn, substreamHandler)
+              creator(Source.fromGraph(entitySource.source))
+          }
         }
       }
-    }
   }
 
   /**
@@ -281,7 +285,7 @@ private[http] object OutgoingConnectionBlueprint {
    * 3. Go back to 1.
    */
   private[client] final class ResponseParsingMerge(rootParser: HttpResponseParser)
-    extends GraphStage[FanInShape2[SessionBytes, BypassData, List[ResponseOutput]]] {
+      extends GraphStage[FanInShape2[SessionBytes, BypassData, List[ResponseOutput]]] {
     private val dataIn = Inlet[SessionBytes]("ResponseParsingMerge.dataIn")
     private val bypassIn = Inlet[BypassData]("ResponseParsingMerge.bypassIn")
     private val responseOut = Outlet[List[ResponseOutput]]("ResponseParsingMerge.responseOut")
@@ -297,34 +301,36 @@ private[http] object OutgoingConnectionBlueprint {
       var waitingForMethod = true
       var completeStagePending = false
 
-      setHandler(bypassIn, new InHandler {
-        override def onPush(): Unit = {
-          val responseContext = grab(bypassIn)
-          parser.setContextForNextResponse(responseContext)
-          val output = parser.parseBytes(ByteString.empty)
-          drainParser(output)
-        }
-        override def onUpstreamFinish(): Unit =
-          if (waitingForMethod) completeStage()
-      })
-
-      setHandler(dataIn, new InHandler {
-        override def onPush(): Unit = {
-          val bytes = grab(dataIn)
-          val output = parser.parseSessionBytes(bytes)
-          drainParser(output)
-        }
-        override def onUpstreamFinish(): Unit =
-          if (waitingForMethod) completeStage()
-          else {
-            if (parser.onUpstreamFinish()) {
-              completeStage()
-            } else {
-              completeStagePending = true
-              emit(responseOut, parser.onPull() :: Nil, () => completeStage())
-            }
+      setHandler(bypassIn,
+        new InHandler {
+          override def onPush(): Unit = {
+            val responseContext = grab(bypassIn)
+            parser.setContextForNextResponse(responseContext)
+            val output = parser.parseBytes(ByteString.empty)
+            drainParser(output)
+          }
+          override def onUpstreamFinish(): Unit =
+            if (waitingForMethod) completeStage()
+        })
+
+      setHandler(dataIn,
+        new InHandler {
+          override def onPush(): Unit = {
+            val bytes = grab(dataIn)
+            val output = parser.parseSessionBytes(bytes)
+            drainParser(output)
           }
-      })
+          override def onUpstreamFinish(): Unit =
+            if (waitingForMethod) completeStage()
+            else {
+              if (parser.onUpstreamFinish()) {
+                completeStage()
+              } else {
+                completeStagePending = true
+                emit(responseOut, parser.onPull() :: Nil, () => completeStage())
+              }
+            }
+        })
 
       setHandler(responseOut, eagerTerminateOutput)
 
@@ -359,5 +365,6 @@ private[http] object OutgoingConnectionBlueprint {
   }
 
   class UnexpectedConnectionClosureException(outstandingResponses: Int)
-    extends RuntimeException(s"The http server closed the connection unexpectedly before delivering responses for $outstandingResponses outstanding requests")
+      extends RuntimeException(
+        s"The http server closed the connection unexpectedly before delivering responses for $outstandingResponses outstanding requests")
 }
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolInterface.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolInterface.scala
index e1a3e2fbb..05226090a 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolInterface.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolInterface.scala
@@ -38,6 +38,7 @@ import scala.util.{ Failure, Success, Try }
  * The pool interface is a push style interface to a pool of connections against a single host.
  */
 private[http] trait PoolInterface {
+
   /**
    * Submit request to pool. After completion the pool will complete the promise with the response.
    * If the queue in front of the pool is full, the promise will be failed with a BufferOverflowException.
@@ -75,14 +76,16 @@ private[http] object PoolInterface {
 
   private val IdleTimeout = "idle-timeout"
 
-  class PoolInterfaceStage(poolId: PoolId, master: PoolMaster, bufferSize: Int, log: LoggingAdapter) extends GraphStageWithMaterializedValue[FlowShape[ResponseContext, RequestContext], PoolInterface] {
+  class PoolInterfaceStage(poolId: PoolId, master: PoolMaster, bufferSize: Int, log: LoggingAdapter)
+      extends GraphStageWithMaterializedValue[FlowShape[ResponseContext, RequestContext], PoolInterface] {
     private val requestOut = Outlet[RequestContext]("PoolInterface.requestOut")
     private val responseIn = Inlet[ResponseContext]("PoolInterface.responseIn")
     override def shape = FlowShape(responseIn, requestOut)
 
     override def createLogicAndMaterializedValue(inheritedAttributes: Attributes): (GraphStageLogic, PoolInterface) =
       throw new IllegalStateException("Should not be called")
-    override def createLogicAndMaterializedValue(inheritedAttributes: Attributes, _materializer: Materializer): (GraphStageLogic, PoolInterface) = {
+    override def createLogicAndMaterializedValue(
+        inheritedAttributes: Attributes, _materializer: Materializer): (GraphStageLogic, PoolInterface) = {
       import _materializer.executionContext
       val logic = new Logic(poolId, shape, master, requestOut, responseIn, bufferSize, log)
       (logic, logic)
@@ -90,13 +93,15 @@ private[http] object PoolInterface {
   }
 
   @InternalStableApi // name `Logic` and annotated methods
-  private class Logic(poolId: PoolId, shape: FlowShape[ResponseContext, RequestContext], master: PoolMaster, requestOut: Outlet[RequestContext], responseIn: Inlet[ResponseContext], bufferSize: Int,
-                      val log: LoggingAdapter)(implicit executionContext: ExecutionContext) extends TimerGraphStageLogic(shape) with PoolInterface with InHandler with OutHandler with LogHelper {
+  private class Logic(poolId: PoolId, shape: FlowShape[ResponseContext, RequestContext], master: PoolMaster,
+      requestOut: Outlet[RequestContext], responseIn: Inlet[ResponseContext], bufferSize: Int,
+      val log: LoggingAdapter)(implicit executionContext: ExecutionContext) extends TimerGraphStageLogic(shape)
+      with PoolInterface with InHandler with OutHandler with LogHelper {
     private[this] val PoolOverflowException = new BufferOverflowException( // stack trace cannot be prevented here because `BufferOverflowException` is final
       s"Exceeded configured max-open-requests value of [${poolId.hcps.setup.settings.maxOpenRequests}]. This means that the request queue of this pool (${poolId.hcps}) " +
-        s"has completely filled up because the pool currently does not process requests fast enough to handle the incoming request load. " +
-        "Please retry the request later. See https://doc.akka.io/docs/akka-http/current/scala/http/client-side/pool-overflow.html for " +
-        "more information.")
+      s"has completely filled up because the pool currently does not process requests fast enough to handle the incoming request load. " +
+      "Please retry the request later. See https://doc.akka.io/docs/akka-http/current/scala/http/client-side/pool-overflow.html for " +
+      "more information.")
 
     val hcps = poolId.hcps
     val idleTimeout = hcps.setup.settings.idleTimeout
@@ -146,7 +151,8 @@ private[http] object PoolInterface {
     override def onPull(): Unit =
       if (!buffer.isEmpty) {
         val ctx = buffer.removeFirst()
-        debug(s"Dispatching request [${ctx.request.debugString}] from buffer to pool. Remaining buffer: ${buffer.size()}/$bufferSize")
+        debug(
+          s"Dispatching request [${ctx.request.debugString}] from buffer to pool. Remaining buffer: ${buffer.size()}/$bufferSize")
         push(requestOut, ctx)
       }
 
@@ -159,8 +165,7 @@ private[http] object PoolInterface {
           onDispatch(
             request
               .withUri(request.uri.toHttpRequestTargetOriginForm)
-              .withDefaultHeaders(hostHeader)
-          )
+              .withDefaultHeaders(hostHeader))
         val retries = if (request.method.isIdempotent) hcps.setup.settings.maxRetries else 0
         remainingRequested += 1
         resetIdleTimer()
@@ -211,7 +216,8 @@ private[http] object PoolInterface {
     // PoolInterface implementations
     override def request(request: HttpRequest, responsePromise: Promise[HttpResponse]): Unit =
       requestCallback.invokeWithFeedback((request, responsePromise)).failed.foreach { _ =>
-        debug("Request was sent to pool which was already closed, retrying through the master to create new pool instance")
+        debug(
+          "Request was sent to pool which was already closed, retrying through the master to create new pool instance")
         responsePromise.tryCompleteWith(master.dispatchRequest(poolId, request)(materializer))
       }
     override def shutdown()(implicit ec: ExecutionContext): Future[ShutdownReason] = {
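The `PoolInterface` hunks above reformat, among other things, the pool-overflow error message. The behavior behind that message — rejecting new requests outright once the buffered request count reaches `max-open-requests` — can be sketched, stripped of streams and actors, as follows. All names here (`BoundedRequestBuffer`, `offer`, `poll`) are illustrative, not the actual Akka HTTP internals:

```scala
import scala.collection.mutable
import scala.util.{ Failure, Success, Try }

// Minimal sketch of a fail-fast bounded request buffer: once
// `maxOpenRequests` requests are queued, further offers are rejected
// immediately rather than waiting, mirroring the pool's overflow policy.
final class BoundedRequestBuffer[A](maxOpenRequests: Int) {
  private val buffer = mutable.Queue.empty[A]

  def offer(request: A): Try[Unit] =
    if (buffer.size >= maxOpenRequests)
      Failure(new IllegalStateException(
        s"Exceeded configured max-open-requests value of [$maxOpenRequests]"))
    else {
      buffer.enqueue(request)
      Success(())
    }

  // Remove the oldest buffered request, if any (the pool dispatches from here).
  def poll(): Option[A] =
    if (buffer.isEmpty) None else Some(buffer.dequeue())

  def size: Int = buffer.size
}
```

Dispatching one request frees a slot, so a previously rejected request can be retried — which is exactly the advice the real error message gives.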
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolMasterActor.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolMasterActor.scala
index 9dea5aa37..acd4fc376 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolMasterActor.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolMasterActor.scala
@@ -5,7 +5,16 @@
 package akka.http.impl.engine.client
 
 import akka.Done
-import akka.actor.{ Actor, ActorLogging, ActorRef, DeadLetterSuppression, Deploy, ExtendedActorSystem, NoSerializationVerificationNeeded, Props }
+import akka.actor.{
+  Actor,
+  ActorLogging,
+  ActorRef,
+  DeadLetterSuppression,
+  Deploy,
+  ExtendedActorSystem,
+  NoSerializationVerificationNeeded,
+  Props
+}
 import akka.annotation.InternalApi
 import akka.dispatch.ExecutionContexts
 import akka.http.impl.engine.client.PoolInterface.ShutdownReason
@@ -39,6 +48,7 @@ private[http] class PoolMaster(val ref: ActorRef) {
     ref ! SendRequest(poolId, request, responsePromise, fm)
     responsePromise.future
   }
+
   /**
    * Start the corresponding pool to make it ready to serve requests. If the pool is already started,
    * this does nothing. If it is being shutdown, it will restart as soon as the shutdown operation
@@ -86,7 +96,8 @@ private[http] class PoolMaster(val ref: ActorRef) {
   }
 }
 private[http] object PoolMaster {
-  def apply()(implicit system: ExtendedActorSystem): PoolMaster = new PoolMaster(system.systemActorOf(PoolMasterActor.props, "pool-master"))
+  def apply()(implicit system: ExtendedActorSystem): PoolMaster =
+    new PoolMaster(system.systemActorOf(PoolMasterActor.props, "pool-master"))
 }
 
 /**
@@ -105,7 +116,6 @@ private[http] object PoolMaster {
  * and are marked as being shared. This is the case for example for gateways obtained through
  * [[HttpExt.cachedHostConnectionPool]]. Some other gateways are not shared, such as those obtained through
  * [[HttpExt.newHostConnectionPool]], and will have their dedicated restartable pool.
- *
  */
 @InternalApi
 private[http] final class PoolMasterActor extends Actor with ActorLogging {
@@ -141,7 +151,7 @@ private[http] final class PoolMasterActor extends Actor with ActorLogging {
     // freshly created pools will be ready to serve requests immediately.
     case s @ StartPool(poolId, materializer) =>
       statusById.get(poolId) match {
-        case Some(PoolInterfaceRunning(_)) =>
+        case Some(PoolInterfaceRunning(_))                             =>
         case Some(PoolInterfaceShuttingDown(shutdownCompletedPromise)) =>
           // Pool is being shutdown. When this is done, start the pool again.
           shutdownCompletedPromise.future.onComplete(_ => self ! s)(context.dispatcher)
@@ -170,7 +180,8 @@ private[http] final class PoolMasterActor extends Actor with ActorLogging {
           // to this actor by the pool actor, they will be retried once the shutdown
           // has completed.
           val completed = pool.shutdown()(context.dispatcher)
-          shutdownCompletedPromise.tryCompleteWith(completed.map(_ => Done)(ExecutionContexts.sameThreadExecutionContext))
+          shutdownCompletedPromise.tryCompleteWith(
+            completed.map(_ => Done)(ExecutionContexts.sameThreadExecutionContext))
           statusById += poolId -> PoolInterfaceShuttingDown(shutdownCompletedPromise)
         case Some(PoolInterfaceShuttingDown(formerPromise)) =>
           // Pool is already shutting down, mirror the existing promise.
@@ -194,11 +205,14 @@ private[http] final class PoolMasterActor extends Actor with ActorLogging {
             import PoolInterface.ShutdownReason._
             reason match {
               case Success(IdleTimeout) =>
-                log.debug("connection pool for {} was shut down because of idle timeout", PoolInterface.PoolLogSource.genString(poolId))
+                log.debug("connection pool for {} was shut down because of idle timeout",
+                  PoolInterface.PoolLogSource.genString(poolId))
               case Success(ShutdownRequested) =>
-                log.debug("connection pool for {} has shut down as requested", PoolInterface.PoolLogSource.genString(poolId))
+                log.debug("connection pool for {} has shut down as requested",
+                  PoolInterface.PoolLogSource.genString(poolId))
               case Failure(ex) =>
-                log.error(ex, "connection pool for {} has shut down unexpectedly", PoolInterface.PoolLogSource.genString(poolId))
+                log.error(ex, "connection pool for {} has shut down unexpectedly",
+                  PoolInterface.PoolLogSource.genString(poolId))
             }
 
           case Some(PoolInterfaceShuttingDown(shutdownCompletedPromise)) =>
@@ -231,14 +245,19 @@ private[http] object PoolMasterActor {
   final case class PoolInterfaceShuttingDown(shutdownCompletedPromise: Promise[Done]) extends PoolInterfaceStatus
 
   final case class StartPool(poolId: PoolId, materializer: Materializer) extends NoSerializationVerificationNeeded
-  final case class SendRequest(poolId: PoolId, request: HttpRequest, responsePromise: Promise[HttpResponse], materializer: Materializer)
-    extends NoSerializationVerificationNeeded
-  final case class Shutdown(poolId: PoolId, shutdownCompletedPromise: Promise[Done]) extends NoSerializationVerificationNeeded with DeadLetterSuppression
-  final case class ShutdownAll(shutdownCompletedPromise: Promise[Done]) extends NoSerializationVerificationNeeded with DeadLetterSuppression
+  final case class SendRequest(poolId: PoolId, request: HttpRequest, responsePromise: Promise[HttpResponse],
+      materializer: Materializer)
+      extends NoSerializationVerificationNeeded
+  final case class Shutdown(poolId: PoolId, shutdownCompletedPromise: Promise[Done])
+      extends NoSerializationVerificationNeeded with DeadLetterSuppression
+  final case class ShutdownAll(shutdownCompletedPromise: Promise[Done]) extends NoSerializationVerificationNeeded
+      with DeadLetterSuppression
 
-  final case class HasBeenShutdown(interface: PoolInterface, reason: Try[ShutdownReason]) extends NoSerializationVerificationNeeded with DeadLetterSuppression
+  final case class HasBeenShutdown(interface: PoolInterface, reason: Try[ShutdownReason])
+      extends NoSerializationVerificationNeeded with DeadLetterSuppression
 
-  final case class PoolStatus(poolId: PoolId, statusPromise: Promise[Option[PoolInterfaceStatus]]) extends NoSerializationVerificationNeeded
+  final case class PoolStatus(poolId: PoolId, statusPromise: Promise[Option[PoolInterfaceStatus]])
+      extends NoSerializationVerificationNeeded
   final case class PoolSize(sizePromise: Promise[Int]) extends NoSerializationVerificationNeeded
 
 }
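The `PoolMasterActor` changes above reflow the `StartPool`/`Shutdown` handlers. The bookkeeping pattern they implement — a registry keyed by pool id whose entries are either running or shutting down, where starting a pool that is mid-shutdown defers the restart until the shutdown completes — can be sketched synchronously like this (class and method names are illustrative only; the real actor uses promises and messages):

```scala
sealed trait PoolStatus
case object Running extends PoolStatus
case object ShuttingDown extends PoolStatus

// Sketch of the pool-status registry. A start request against a pool that is
// shutting down is remembered and replayed once the shutdown has completed.
final class PoolRegistry {
  private var statusById = Map.empty[String, PoolStatus]
  private var deferredRestarts = List.empty[String]

  def start(id: String): Unit = statusById.get(id) match {
    case Some(Running)      => // already running: nothing to do
    case Some(ShuttingDown) => deferredRestarts ::= id // restart after shutdown
    case None               => statusById += id -> Running
  }

  def shutdown(id: String): Unit =
    statusById.get(id).foreach(_ => statusById += id -> ShuttingDown)

  def shutdownCompleted(id: String): Unit = {
    statusById -= id
    if (deferredRestarts.contains(id)) {
      deferredRestarts = deferredRestarts.filterNot(_ == id)
      start(id) // replay the deferred StartPool
    }
  }

  def status(id: String): Option[PoolStatus] = statusById.get(id)
}
```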
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/client/pool/NewHostConnectionPool.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/client/pool/NewHostConnectionPool.scala
index 96c98695f..e42206d00 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/client/pool/NewHostConnectionPool.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/client/pool/NewHostConnectionPool.scala
@@ -16,7 +16,7 @@ import akka.http.impl.engine.client.PoolFlow.{ RequestContext, ResponseContext }
 import akka.http.impl.engine.client.pool.SlotState._
 import akka.http.impl.util.{ RichHttpRequest, StageLoggingWithOverride, StreamUtils }
 import akka.http.scaladsl.Http
-import akka.http.scaladsl.model.{ HttpEntity, HttpRequest, HttpResponse, headers }
+import akka.http.scaladsl.model.{ headers, HttpEntity, HttpRequest, HttpResponse }
 import akka.http.scaladsl.settings.ConnectionPoolSettings
 import akka.stream._
 import akka.stream.scaladsl.{ Flow, Keep, Sink, Source }
@@ -49,14 +49,14 @@ import scala.util.{ Failure, Random, Success, Try }
 @InternalApi
 private[client] object NewHostConnectionPool {
   def apply(
-    connectionFlow: Flow[HttpRequest, HttpResponse, Future[Http.OutgoingConnection]],
-    settings:       ConnectionPoolSettings, log: LoggingAdapter): Flow[RequestContext, ResponseContext, NotUsed] =
+      connectionFlow: Flow[HttpRequest, HttpResponse, Future[Http.OutgoingConnection]],
+      settings: ConnectionPoolSettings, log: LoggingAdapter): Flow[RequestContext, ResponseContext, NotUsed] =
     Flow.fromGraph(new HostConnectionPoolStage(connectionFlow, settings, log))
 
   private final class HostConnectionPoolStage(
-    connectionFlow: Flow[HttpRequest, HttpResponse, Future[Http.OutgoingConnection]],
-    _settings:      ConnectionPoolSettings, _log: LoggingAdapter
-  ) extends GraphStage[FlowShape[RequestContext, ResponseContext]] {
+      connectionFlow: Flow[HttpRequest, HttpResponse, Future[Http.OutgoingConnection]],
+      _settings: ConnectionPoolSettings, _log: LoggingAdapter)
+      extends GraphStage[FlowShape[RequestContext, ResponseContext]] {
     val requestsIn = Inlet[RequestContext]("HostConnectionPoolStage.requestsIn")
     val responsesOut = Outlet[ResponseContext]("HostConnectionPoolStage.responsesOut")
 
@@ -151,7 +151,8 @@ private[client] object NewHostConnectionPool {
             // don't increase if the embargo level has already changed since the start of the connection attempt
           }
           if (_connectionEmbargo != oldValue) {
-            log.debug(s"Connection attempt failed. Backing off new connection attempts for at least ${_connectionEmbargo}.")
+            log.debug(
+              s"Connection attempt failed. Backing off new connection attempts for at least ${_connectionEmbargo}.")
             slots.foreach(_.onNewConnectionEmbargo(_connectionEmbargo))
           }
         }
@@ -164,8 +165,10 @@ private[client] object NewHostConnectionPool {
         }
         object Event {
           val onPreConnect = event0("onPreConnect", _.onPreConnect(_))
-          val onConnectionAttemptSucceeded = event[Http.OutgoingConnection]("onConnectionAttemptSucceeded", _.onConnectionAttemptSucceeded(_, _))
-          val onConnectionAttemptFailed = event[Throwable]("onConnectionAttemptFailed", _.onConnectionAttemptFailed(_, _))
+          val onConnectionAttemptSucceeded =
+            event[Http.OutgoingConnection]("onConnectionAttemptSucceeded", _.onConnectionAttemptSucceeded(_, _))
+          val onConnectionAttemptFailed =
+            event[Throwable]("onConnectionAttemptFailed", _.onConnectionAttemptFailed(_, _))
 
           val onNewConnectionEmbargo = event[FiniteDuration]("onNewConnectionEmbargo", _.onNewConnectionEmbargo(_, _))
 
@@ -186,8 +189,10 @@ private[client] object NewHostConnectionPool {
 
           val onTimeout = event0("onTimeout", _.onTimeout(_))
 
-          private def event0(name: String, transition: (SlotState, Slot) => SlotState): Event[Unit] = new Event(name, (state, slot, _) => transition(state, slot))
-          private def event[T](name: String, transition: (SlotState, Slot, T) => SlotState): Event[T] = new Event[T](name, transition)
+          private def event0(name: String, transition: (SlotState, Slot) => SlotState): Event[Unit] =
+            new Event(name, (state, slot, _) => transition(state, slot))
+          private def event[T](name: String, transition: (SlotState, Slot, T) => SlotState): Event[T] =
+            new Event[T](name, transition)
         }
 
         protected trait StateHandling {
@@ -283,12 +288,13 @@ private[client] object NewHostConnectionPool {
                     val myTimeoutId = createNewTimeoutId()
                     currentTimeoutId = myTimeoutId
                     currentTimeout =
-                      materializer.scheduleOnce(d, safeRunnable {
-                        if (myTimeoutId == currentTimeoutId) { // timeout may race with state changes, ignore if timeout isn't current any more
-                          debug(s"Slot timeout after $d")
-                          updateState(Event.onTimeout)
-                        }
-                      })
+                      materializer.scheduleOnce(d,
+                        safeRunnable {
+                          if (myTimeoutId == currentTimeoutId) { // timeout may race with state changes, ignore if timeout isn't current any more
+                            debug(s"Slot timeout after $d")
+                            updateState(Event.onTimeout)
+                          }
+                        })
                   case _ => // no timeout set, nothing to do
                 }
 
@@ -329,7 +335,8 @@ private[client] object NewHostConnectionPool {
                   case Unconnected if currentEmbargo != Duration.Zero =>
                     OptionVal.Some(Event.onNewConnectionEmbargo.preApply(currentEmbargo))
                   // numConnectedSlots might be slow for big numbers of connections, so avoid calling if minConnections feature is disabled
-                  case s if !s.isConnected && s.isIdle && settings.minConnections > 0 && numConnectedSlots < settings.minConnections =>
+                  case s
+                      if !s.isConnected && s.isIdle && settings.minConnections > 0 && numConnectedSlots < settings.minConnections =>
                     debug(s"Preconnecting because number of connected slots fell down to $numConnectedSlots")
                     OptionVal.Some(Event.onPreConnect)
                   case _ => OptionVal.None
@@ -368,7 +375,7 @@ private[client] object NewHostConnectionPool {
               else
                 throw new IllegalStateException(
                   "State transition loop exceeded maximum number of loops. The pool will shutdown itself. " +
-                    "That's probably a bug. Please file a bug at https://github.com/akka/akka-http/issues. ")
+                  "That's probably a bug. Please file a bug at https://github.com/akka/akka-http/issues. ")
 
             loop(event, arg, 10)
           }
@@ -388,11 +395,13 @@ private[client] object NewHostConnectionPool {
             () => random.nextLong() % max
           }
           def openConnection(): Unit = {
-            if (connection ne null) throw new IllegalStateException("Cannot open connection when slot still has an open connection")
+            if (connection ne null)
+              throw new IllegalStateException("Cannot open connection when slot still has an open connection")
 
             connection = logic.openConnection(this)
             if (settings.maxConnectionLifetime.isFinite) {
-              disconnectAt = Instant.now().toEpochMilli + settings.maxConnectionLifetime.toMillis + keepAliveDurationFuzziness()
+              disconnectAt =
+                Instant.now().toEpochMilli + settings.maxConnectionLifetime.toMillis + keepAliveDurationFuzziness()
             }
           }
 
@@ -404,7 +413,8 @@ private[client] object NewHostConnectionPool {
           def isCurrentConnection(conn: SlotConnection): Boolean = connection eq conn
           def isConnectionClosed: Boolean = (connection eq null) || connection.isClosed
 
-          def dispatchResponseResult(req: RequestContext, result: Try[HttpResponse]): Unit = logic.dispatchResponseResult(req, result)
+          def dispatchResponseResult(req: RequestContext, result: Try[HttpResponse]): Unit =
+            logic.dispatchResponseResult(req, result)
 
           def willCloseAfter(res: HttpResponse): Boolean = {
             logic.willClose(res) || keepAliveTimeApplies()
@@ -429,10 +439,9 @@ private[client] object NewHostConnectionPool {
           }
 
         final class SlotConnection(
-          _slot:      Slot,
-          requestOut: SubSourceOutlet[HttpRequest],
-          responseIn: SubSinkInlet[HttpResponse]
-        ) extends InHandler with OutHandler { connection =>
+            _slot: Slot,
+            requestOut: SubSourceOutlet[HttpRequest],
+            responseIn: SubSinkInlet[HttpResponse]) extends InHandler with OutHandler { connection =>
           var ongoingResponseEntity: Option[HttpEntity] = None
           var ongoingResponseEntityKillSwitch: Option[KillSwitch] = None
           var connectionEstablished: Boolean = false
@@ -471,7 +480,8 @@ private[client] object NewHostConnectionPool {
 
             responseIn.cancel()
 
-            val exception = failure.getOrElse(new IllegalStateException("Connection was closed while response was still in-flight"))
+            val exception =
+              failure.getOrElse(new IllegalStateException("Connection was closed while response was still in-flight"))
             ongoingResponseEntity.foreach(_.dataBytes.runWith(Sink.cancelled)(subFusingMaterializer))
             ongoingResponseEntityKillSwitch.foreach(_.abort(exception))
           }
@@ -525,8 +535,8 @@ private[client] object NewHostConnectionPool {
                 slot.debug("Connection failed")
                 slot.onConnectionFailed(ex)
               }
-              // otherwise, rely on connection.onComplete to fail below
-              // (connection error is sent through matValue future and through the stream)
+            // otherwise, rely on connection.onComplete to fail below
+            // (connection error is sent through matValue future and through the stream)
             }
 
           def onPull(): Unit = () // emitRequests makes sure not to push too early
@@ -537,7 +547,8 @@ private[client] object NewHostConnectionPool {
               // Let's use StreamTcpException for now.
               // FIXME: after moving to Akka 2.6.x only, we can use cancelation cause propagation which would probably also report
               // a StreamTcpException here
-              slot.onConnectionFailed(new StreamTcpException("Connection was cancelled (caused by a failure of the underlying HTTP connection)"))
+              slot.onConnectionFailed(new StreamTcpException(
+                "Connection was cancelled (caused by a failure of the underlying HTTP connection)"))
               responseIn.cancel()
             }
 
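One of the reflowed `scheduleOnce` blocks in `NewHostConnectionPool` carries a timeout-race guard: each scheduled timeout captures an id, and when the timer fires it is ignored unless that id is still current, because a state change may have superseded it in the meantime. Stripped of the stream machinery, the idea looks like this (a sketch with hypothetical names, not the actual stage code):

```scala
// Sketch of the timeout-id race guard. Scheduling bumps the current id and
// hands it to the timer; any state change bumps it again, so a stale timer
// firing later sees a mismatched id and does nothing.
final class TimeoutGuard {
  private var currentTimeoutId = 0L
  var timedOut = false

  // Called when a new timeout is scheduled; returns the id the timer captures.
  def scheduleTimeout(): Long = { currentTimeoutId += 1; currentTimeoutId }

  // Any state transition invalidates whatever timeout is pending.
  def onStateChange(): Unit = currentTimeoutId += 1

  // Timer callback: only a still-current timeout takes effect.
  def onTimerFired(myTimeoutId: Long): Unit =
    if (myTimeoutId == currentTimeoutId) timedOut = true
}
```

This avoids having to cancel timers reliably: a superseded timer may still fire, but it is harmless.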
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/client/pool/SlotState.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/client/pool/SlotState.scala
index f6af15c47..a163e9880 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/client/pool/SlotState.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/client/pool/SlotState.scala
@@ -44,10 +44,13 @@ private[pool] sealed abstract class SlotState extends Product {
 
   def idle(ctx: SlotContext): SlotState = SlotState.Idle(ctx.settings.keepAliveTimeout)
   def onPreConnect(ctx: SlotContext): SlotState = illegalState(ctx, "onPreConnect")
-  def onConnectionAttemptSucceeded(ctx: SlotContext, outgoingConnection: Http.OutgoingConnection): SlotState = illegalState(ctx, "onConnectionAttemptSucceeded")
-  def onConnectionAttemptFailed(ctx: SlotContext, cause: Throwable): SlotState = illegalState(ctx, "onConnectionAttemptFailed")
+  def onConnectionAttemptSucceeded(ctx: SlotContext, outgoingConnection: Http.OutgoingConnection): SlotState =
+    illegalState(ctx, "onConnectionAttemptSucceeded")
+  def onConnectionAttemptFailed(ctx: SlotContext, cause: Throwable): SlotState =
+    illegalState(ctx, "onConnectionAttemptFailed")
 
-  def onNewConnectionEmbargo(ctx: SlotContext, embargoDuration: FiniteDuration): SlotState = illegalState(ctx, "onNewConnectionEmbargo")
+  def onNewConnectionEmbargo(ctx: SlotContext, embargoDuration: FiniteDuration): SlotState =
+    illegalState(ctx, "onNewConnectionEmbargo")
 
   def onNewRequest(ctx: SlotContext, requestContext: RequestContext): SlotState = illegalState(ctx, "onNewRequest")
 
@@ -66,7 +69,8 @@ private[pool] sealed abstract class SlotState extends Product {
 
   /** Will be called either immediately if the response entity is strict or otherwise later */
   def onResponseEntityCompleted(ctx: SlotContext): SlotState = illegalState(ctx, "onResponseEntityCompleted")
-  def onResponseEntityFailed(ctx: SlotContext, cause: Throwable): SlotState = illegalState(ctx, "onResponseEntityFailed")
+  def onResponseEntityFailed(ctx: SlotContext, cause: Throwable): SlotState =
+    illegalState(ctx, "onResponseEntityFailed")
 
   def onConnectionCompleted(ctx: SlotContext): SlotState = illegalState(ctx, "onConnectionCompleted")
   def onConnectionFailed(ctx: SlotContext, cause: Throwable): SlotState = illegalState(ctx, "onConnectionFailed")
@@ -122,20 +126,25 @@ private[pool] object SlotState {
       super.onShutdown(ctx)
     }
 
-    override def onConnectionAttemptFailed(ctx: SlotContext, cause: Throwable): SlotState = failOngoingRequest(ctx, "connection attempt failed", cause)
+    override def onConnectionAttemptFailed(ctx: SlotContext, cause: Throwable): SlotState =
+      failOngoingRequest(ctx, "connection attempt failed", cause)
 
-    override def onRequestEntityFailed(ctx: SlotContext, cause: Throwable): SlotState = failOngoingRequest(ctx, "request entity stream failed", cause)
+    override def onRequestEntityFailed(ctx: SlotContext, cause: Throwable): SlotState =
+      failOngoingRequest(ctx, "request entity stream failed", cause)
     override def onConnectionCompleted(ctx: SlotContext): SlotState =
       // There's no good reason why the connection stream (i.e. the user-facing client Flow[HttpRequest, HttpResponse])
       // would complete during processing of a request.
       // One reason might be that failures on the TCP layer don't necessarily propagate through the stack as failures
       // because of the notorious cancel/failure propagation which can convert failures into completion.
-      failOngoingRequest(ctx, "connection completed", new IllegalStateException("Connection was shutdown.") with NoStackTrace)
+      failOngoingRequest(ctx, "connection completed",
+        new IllegalStateException("Connection was shutdown.") with NoStackTrace)
 
-    override def onConnectionFailed(ctx: SlotContext, cause: Throwable): SlotState = failOngoingRequest(ctx, "connection failure", cause)
+    override def onConnectionFailed(ctx: SlotContext, cause: Throwable): SlotState =
+      failOngoingRequest(ctx, "connection failure", cause)
 
     private def failOngoingRequest(ctx: SlotContext, signal: String, cause: Throwable): SlotState = {
-      ctx.debug(s"Ongoing request [${ongoingRequest.request.debugString}] is failed because of [$signal]: [${cause.getMessage}]")
+      ctx.debug(
+        s"Ongoing request [${ongoingRequest.request.debugString}] is failed because of [$signal]: [${cause.getMessage}]")
       if (ongoingRequest.canBeRetried) { // push directly because it will be buffered internally
         ctx.dispatchResponseResult(ongoingRequest, Failure(cause))
         if (waitingForEndOfRequestEntity) WaitingForEndOfRequestEntity
@@ -204,7 +213,8 @@ private[pool] object SlotState {
   private[pool] final case class Connecting(ongoingRequest: RequestContext) extends ConnectedState with BusyState {
     val waitingForEndOfRequestEntity = false
 
-    override def onConnectionAttemptSucceeded(ctx: SlotContext, outgoingConnection: Http.OutgoingConnection): SlotState = {
+    override def onConnectionAttemptSucceeded(
+        ctx: SlotContext, outgoingConnection: Http.OutgoingConnection): SlotState = {
       ctx.debug("Slot connection was established")
       PushingRequestToConnection(ongoingRequest)
     }
@@ -212,7 +222,8 @@ private[pool] object SlotState {
   }
 
   private[pool] case object PreConnecting extends ConnectedState with IdleState {
-    override def onConnectionAttemptSucceeded(ctx: SlotContext, outgoingConnection: Http.OutgoingConnection): SlotState = {
+    override def onConnectionAttemptSucceeded(
+        ctx: SlotContext, outgoingConnection: Http.OutgoingConnection): SlotState = {
       ctx.debug("Slot connection was (pre-)established")
       idle(ctx)
     }
@@ -230,8 +241,7 @@ private[pool] object SlotState {
       onConnectionFailure(
         ctx,
         "connection completed",
-        new IllegalStateException("Unexpected connection closure") with NoStackTrace
-      )
+        new IllegalStateException("Unexpected connection closure") with NoStackTrace)
 
     private def onConnectionFailure(ctx: SlotContext, signal: String, cause: Throwable): SlotState = {
       ctx.debug(s"Connection was closed by [$signal] while preconnecting because of [${cause.getMessage}].")
@@ -242,10 +252,12 @@ private[pool] object SlotState {
     override def waitingForEndOfRequestEntity: Boolean = ???
 
     override def onRequestDispatched(ctx: SlotContext): SlotState =
-      if (ongoingRequest.request.entity.isStrict) WaitingForResponse(ongoingRequest, waitingForEndOfRequestEntity = false)
+      if (ongoingRequest.request.entity.isStrict)
+        WaitingForResponse(ongoingRequest, waitingForEndOfRequestEntity = false)
       else WaitingForResponse(ongoingRequest, waitingForEndOfRequestEntity = true)
   }
-  final case class WaitingForResponse(ongoingRequest: RequestContext, waitingForEndOfRequestEntity: Boolean) extends ConnectedState with BusyState {
+  final case class WaitingForResponse(
+      ongoingRequest: RequestContext, waitingForEndOfRequestEntity: Boolean) extends ConnectedState with BusyState {
 
     override def onRequestEntityCompleted(ctx: SlotContext): SlotState = {
       require(waitingForEndOfRequestEntity)
@@ -260,9 +272,9 @@ private[pool] object SlotState {
     // connection failures are handled by BusyState implementations
   }
   final case class WaitingForResponseDispatch(
-    ongoingRequest:               RequestContext,
-    result:                       Try[HttpResponse],
-    waitingForEndOfRequestEntity: Boolean) extends ConnectedState with BusyWithResultAlreadyDetermined {
+      ongoingRequest: RequestContext,
+      result: Try[HttpResponse],
+      waitingForEndOfRequestEntity: Boolean) extends ConnectedState with BusyWithResultAlreadyDetermined {
 
     override def onRequestEntityCompleted(ctx: SlotContext): SlotState = {
       require(waitingForEndOfRequestEntity)
@@ -274,7 +286,8 @@ private[pool] object SlotState {
       ctx.dispatchResponseResult(ongoingRequest, result)
 
       result match {
-        case Success(res)   => WaitingForResponseEntitySubscription(ongoingRequest, res, ctx.settings.responseEntitySubscriptionTimeout, waitingForEndOfRequestEntity)
+        case Success(res) => WaitingForResponseEntitySubscription(ongoingRequest, res,
+            ctx.settings.responseEntitySubscriptionTimeout, waitingForEndOfRequestEntity)
         case Failure(cause) => Failed(cause)
       }
     }
@@ -282,7 +295,8 @@ private[pool] object SlotState {
 
   private[pool] /* to avoid warnings */ trait BusyWithResultAlreadyDetermined extends ConnectedState with BusyState {
     override def onResponseEntityFailed(ctx: SlotContext, cause: Throwable): SlotState = {
-      ctx.debug(s"Response entity for request [${ongoingRequest.request.debugString}] failed with [${cause.getMessage}]")
+      ctx.debug(
+        s"Response entity for request [${ongoingRequest.request.debugString}] failed with [${cause.getMessage}]")
       // response must have already been dispatched, so don't try to dispatch a response
       Failed(cause)
     }
@@ -294,12 +308,14 @@ private[pool] object SlotState {
   }
 
   final case class WaitingForResponseEntitySubscription(
-    ongoingRequest:  RequestContext,
-    ongoingResponse: HttpResponse, override val stateTimeout: Duration, waitingForEndOfRequestEntity: Boolean) extends ConnectedState with BusyWithResultAlreadyDetermined {
+      ongoingRequest: RequestContext,
+      ongoingResponse: HttpResponse, override val stateTimeout: Duration, waitingForEndOfRequestEntity: Boolean)
+      extends ConnectedState with BusyWithResultAlreadyDetermined {
 
     override def onRequestEntityCompleted(ctx: SlotContext): SlotState = {
       require(waitingForEndOfRequestEntity)
-      WaitingForResponseEntitySubscription(ongoingRequest, ongoingResponse, stateTimeout, waitingForEndOfRequestEntity = false)
+      WaitingForResponseEntitySubscription(ongoingRequest, ongoingResponse, stateTimeout,
+        waitingForEndOfRequestEntity = false)
     }
 
     override def onResponseEntitySubscribed(ctx: SlotContext): SlotState =
@@ -308,16 +324,16 @@ private[pool] object SlotState {
     override def onTimeout(ctx: SlotContext): SlotState = {
       val msg =
         s"Response entity was not subscribed after $stateTimeout. Make sure to read the response `entity` body or call `entity.discardBytes()` on it -- in case you deal with `HttpResponse`, use the shortcut `response.discardEntityBytes()`. " +
-          s"${ongoingRequest.request.debugString} -> ${ongoingResponse.debugString}"
+        s"${ongoingRequest.request.debugString} -> ${ongoingResponse.debugString}"
       ctx.warning(msg) // FIXME: should still warn here?
       Failed(new TimeoutException(msg))
     }
 
   }
   final case class WaitingForEndOfResponseEntity(
-    ongoingRequest:               RequestContext,
-    ongoingResponse:              HttpResponse,
-    waitingForEndOfRequestEntity: Boolean) extends ConnectedState with BusyWithResultAlreadyDetermined {
+      ongoingRequest: RequestContext,
+      ongoingResponse: HttpResponse,
+      waitingForEndOfRequestEntity: Boolean) extends ConnectedState with BusyWithResultAlreadyDetermined {
 
     override def onResponseEntityCompleted(ctx: SlotContext): SlotState =
       if (waitingForEndOfRequestEntity)
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/ByteFlag.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/ByteFlag.scala
index c10099621..84c3bdc40 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/ByteFlag.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/ByteFlag.scala
@@ -16,6 +16,7 @@ private[http] final class ByteFlag(val value: Int) extends AnyVal {
   def ifSet(flag: Boolean): ByteFlag = if (flag) this else ByteFlag.Zero
   override def toString: String = s"ByteFlag(${Integer.toHexString(value)})"
 }
+
 /** INTERNAL API */
 @InternalApi
 private[impl] object ByteFlag {
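[editor's note, not part of the patch] The `ByteFlag` value class touched above wraps HTTP/2 frame flag bits in an allocation-free `AnyVal`. As a standalone illustration of that pattern (hypothetical names and flag values, not the project's `Http2Protocol` constants), a minimal sketch:

```scala
// Sketch of the bit-flag pattern used by ByteFlag (illustrative only;
// the real Http2Protocol flag constants and names differ).
final class Flag(val value: Int) extends AnyVal {
  def |(other: Flag): Flag = new Flag(value | other.value)
  def isSet(in: Int): Boolean = (in & value) != 0
  // Mirrors ByteFlag.ifSet: keep the flag only when the condition holds.
  def ifSet(cond: Boolean): Flag = if (cond) this else Flag.Zero
}

object Flag {
  val Zero = new Flag(0)
  val EndStream = new Flag(0x1)  // hypothetical bit assignments
  val EndHeaders = new Flag(0x4)
}

object FlagDemo {
  def main(args: Array[String]): Unit = {
    // Combine flags conditionally, as a frame renderer would.
    val flags = Flag.EndStream.ifSet(true) | Flag.EndHeaders.ifSet(false)
    assert(Flag.EndStream.isSet(flags.value))
    assert(!Flag.EndHeaders.isSet(flags.value))
    println("ok")
  }
}
```

Because `Flag` extends `AnyVal`, combining flags this way avoids boxing on the hot rendering path, which is presumably why the original uses a value class.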
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/FrameEvent.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/FrameEvent.scala
index 288b67764..b1440debe 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/FrameEvent.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/FrameEvent.scala
@@ -17,6 +17,7 @@ import scala.collection.immutable
 private[http2] sealed trait FrameEvent { self: Product =>
   def frameTypeName: String = productPrefix
 }
+
 /** INTERNAL API */
 @InternalApi
 private[http] object FrameEvent {
@@ -25,13 +26,15 @@ private[http] object FrameEvent {
     def streamId: Int
   }
 
-  final case class GoAwayFrame(lastStreamId: Int, errorCode: ErrorCode, debug: ByteString = ByteString.empty) extends FrameEvent {
+  final case class GoAwayFrame(
+      lastStreamId: Int, errorCode: ErrorCode, debug: ByteString = ByteString.empty) extends FrameEvent {
     override def toString: String = s"GoAwayFrame($lastStreamId,$errorCode,debug:<hidden>)"
   }
   final case class DataFrame(
-    streamId:  Int,
-    endStream: Boolean,
-    payload:   ByteString) extends StreamFrameEvent {
+      streamId: Int,
+      endStream: Boolean,
+      payload: ByteString) extends StreamFrameEvent {
+
     /**
      * The amount of bytes this frame consumes of a window. According to RFC 7540, 6.9.1:
      *
@@ -48,20 +51,20 @@ private[http] object FrameEvent {
    * HeadersFrame will be parsed into a logical ParsedHeadersFrame.
    */
   final case class HeadersFrame(
-    streamId:            Int,
-    endStream:           Boolean,
-    endHeaders:          Boolean,
-    headerBlockFragment: ByteString,
-    priorityInfo:        Option[PriorityFrame]) extends StreamFrameEvent
+      streamId: Int,
+      endStream: Boolean,
+      endHeaders: Boolean,
+      headerBlockFragment: ByteString,
+      priorityInfo: Option[PriorityFrame]) extends StreamFrameEvent
   final case class ContinuationFrame(
-    streamId:   Int,
-    endHeaders: Boolean,
-    payload:    ByteString) extends StreamFrameEvent
+      streamId: Int,
+      endHeaders: Boolean,
+      payload: ByteString) extends StreamFrameEvent
   case class PushPromiseFrame(
-    streamId:            Int,
-    endHeaders:          Boolean,
-    promisedStreamId:    Int,
-    headerBlockFragment: ByteString) extends StreamFrameEvent
+      streamId: Int,
+      endHeaders: Boolean,
+      promisedStreamId: Int,
+      headerBlockFragment: ByteString) extends StreamFrameEvent
 
   final case class RstStreamFrame(streamId: Int, errorCode: ErrorCode) extends StreamFrameEvent
   final case class SettingsFrame(settings: immutable.Seq[Setting]) extends FrameEvent
@@ -71,18 +74,18 @@ private[http] object FrameEvent {
     require(data.size == 8, s"PingFrame payload must be of size 8 but was ${data.size}")
   }
   final case class WindowUpdateFrame(
-    streamId:            Int,
-    windowSizeIncrement: Int) extends StreamFrameEvent
+      streamId: Int,
+      windowSizeIncrement: Int) extends StreamFrameEvent
 
   final case class PriorityFrame(
-    streamId:         Int,
-    exclusiveFlag:    Boolean,
-    streamDependency: Int,
-    weight:           Int) extends StreamFrameEvent
+      streamId: Int,
+      exclusiveFlag: Boolean,
+      streamDependency: Int,
+      weight: Int) extends StreamFrameEvent
 
   final case class Setting(
-    identifier: SettingIdentifier,
-    value:      Int)
+      identifier: SettingIdentifier,
+      value: Int)
 
   object Setting {
     implicit def autoConvertFromTuple(tuple: (SettingIdentifier, Int)): Setting =
@@ -91,20 +94,19 @@ private[http] object FrameEvent {
 
   /** Dummy event for all unknown frames */
   final case class UnknownFrameEvent(
-    tpe:      FrameType,
-    flags:    ByteFlag,
-    streamId: Int,
-    payload:  ByteString) extends StreamFrameEvent
+      tpe: FrameType,
+      flags: ByteFlag,
+      streamId: Int,
+      payload: ByteString) extends StreamFrameEvent
 
   /**
    * Convenience (logical) representation of a parsed HEADERS frame with zero, one or
    * many CONTINUATIONS Frames into a single, decompressed object.
    */
   final case class ParsedHeadersFrame(
-    streamId:      Int,
-    endStream:     Boolean,
-    keyValuePairs: Seq[(String, AnyRef)],
-    priorityInfo:  Option[PriorityFrame]
-  ) extends StreamFrameEvent
+      streamId: Int,
+      endStream: Boolean,
+      keyValuePairs: Seq[(String, AnyRef)],
+      priorityInfo: Option[PriorityFrame]) extends StreamFrameEvent
 
 }
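[editor's note, not part of the patch] The `FrameEvent` case classes above model the frame types of RFC 7540. Each frame on the wire is preceded by the fixed 9-octet header of RFC 7540 §4.1 (24-bit length, 8-bit type, 8-bit flags, 1 reserved bit plus a 31-bit stream id). A self-contained sketch of that layout (not the project's `FrameRenderer`; names are illustrative):

```scala
// Illustrative encoder/decoder for the RFC 7540 §4.1 frame header
// (9 octets: 24-bit length, 8-bit type, 8-bit flags, R + 31-bit stream id).
object FrameHeader {
  def encode(length: Int, tpe: Byte, flags: Byte, streamId: Int): Array[Byte] =
    Array(
      ((length >> 16) & 0xff).toByte,
      ((length >> 8) & 0xff).toByte,
      (length & 0xff).toByte,
      tpe,
      flags,
      ((streamId >> 24) & 0x7f).toByte, // reserved bit cleared
      ((streamId >> 16) & 0xff).toByte,
      ((streamId >> 8) & 0xff).toByte,
      (streamId & 0xff).toByte)

  def decodeLength(h: Array[Byte]): Int =
    ((h(0) & 0xff) << 16) | ((h(1) & 0xff) << 8) | (h(2) & 0xff)

  def decodeStreamId(h: Array[Byte]): Int =
    ((h(5) & 0x7f) << 24) | ((h(6) & 0xff) << 16) |
      ((h(7) & 0xff) << 8) | (h(8) & 0xff)
}
```

For example, a `DataFrame(streamId = 5, endStream = true, payload)` with an 8-byte payload would carry a header of `encode(8, 0x0.toByte, 0x1.toByte, 5)`, matching the windowed-size note in the `DataFrame` scaladoc above (flags consume no window; only length does).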
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/FrameLogger.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/FrameLogger.scala
index 4e01ecb3b..6624973e5 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/FrameLogger.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/FrameLogger.scala
@@ -34,11 +34,10 @@ private[http2] object FrameLogger {
 
   def logEvent(frameEvent: FrameEvent): String = {
     case class LogEntry(
-      streamId:       Int,
-      shortFrameType: String,
-      extraInfo:      String,
-      flags:          Option[String]*
-    )
+        streamId: Int,
+        shortFrameType: String,
+        extraInfo: String,
+        flags: Option[String]*)
 
     def flag(value: Boolean, name: String): Option[String] = if (value) Some(name) else None
     def hex(bytes: ByteString): String = {
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2.scala
index 18950e3d3..dc966f189 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2.scala
@@ -4,12 +4,24 @@
 
 package akka.http.impl.engine.http2
 
-import akka.actor.{ ActorSystem, ClassicActorSystemProvider, ExtendedActorSystem, Extension, ExtensionId, ExtensionIdProvider }
+import akka.actor.{
+  ActorSystem,
+  ClassicActorSystemProvider,
+  ExtendedActorSystem,
+  Extension,
+  ExtensionId,
+  ExtensionIdProvider
+}
 import akka.annotation.InternalApi
 import akka.dispatch.ExecutionContexts
 import akka.event.LoggingAdapter
 import akka.http.impl.engine.HttpConnectionIdleTimeoutBidi
-import akka.http.impl.engine.server.{ GracefulTerminatorStage, MasterServerTerminator, ServerTerminator, UpgradeToOtherProtocolResponseHeader }
+import akka.http.impl.engine.server.{
+  GracefulTerminatorStage,
+  MasterServerTerminator,
+  ServerTerminator,
+  UpgradeToOtherProtocolResponseHeader
+}
 import akka.http.impl.util.LogByteStringTools
 import akka.http.scaladsl.Http.OutgoingConnection
 import akka.http.scaladsl.{ ConnectionContext, Http, HttpsConnectionContext }
@@ -42,7 +54,7 @@ import scala.util.{ Failure, Success }
  */
 @InternalApi
 private[http] final class Http2Ext(implicit val system: ActorSystem)
-  extends akka.actor.Extension {
+    extends akka.actor.Extension {
   // FIXME: won't having the same package as top-level break osgi?
 
   import Http2._
@@ -54,11 +66,11 @@ private[http] final class Http2Ext(implicit val system: ActorSystem)
 
   // TODO: split up similarly to what `Http` does into `serverLayer`, `bindAndHandle`, etc.
   def bindAndHandleAsync(
-    handler:   HttpRequest => Future[HttpResponse],
-    interface: String, port: Int = DefaultPortForProtocol,
-    connectionContext: ConnectionContext,
-    settings:          ServerSettings    = ServerSettings(system),
-    log:               LoggingAdapter    = system.log)(implicit fm: Materializer): Future[ServerBinding] = {
+      handler: HttpRequest => Future[HttpResponse],
+      interface: String, port: Int = DefaultPortForProtocol,
+      connectionContext: ConnectionContext,
+      settings: ServerSettings = ServerSettings(system),
+      log: LoggingAdapter = system.log)(implicit fm: Materializer): Future[ServerBinding] = {
 
     val httpPlusSwitching: HttpPlusSwitching =
       if (connectionContext.isSecure) httpsWithAlpn(connectionContext.asInstanceOf[HttpsConnectionContext])
@@ -71,7 +83,7 @@ private[http] final class Http2Ext(implicit val system: ActorSystem)
 
     val http1: HttpImplementation =
       Flow[HttpRequest].mapAsync(settings.pipeliningLimit)(handleUpgradeRequests(handler, settings, log))
-        .joinMat(GracefulTerminatorStage(system, settings) atop http.serverLayer(settings, log = log))(Keep.right)
+        .joinMat(GracefulTerminatorStage(system, settings).atop(http.serverLayer(settings, log = log)))(Keep.right)
     val http2: HttpImplementation =
       Http2Blueprint.handleWithStreamIdHeader(settings.http2Settings.maxConcurrentStreams)(handler)(system.dispatcher)
         .joinMat(Http2Blueprint.serverStackTls(settings, log, telemetry, Http().dateHeaderRendering))(Keep.right)
@@ -92,7 +104,8 @@ private[http] final class Http2Ext(implicit val system: ActorSystem)
                   }(fm.executionContext)
                   future // drop the terminator matValue, we already registered is which is all we need to do here
               }
-              .join(HttpConnectionIdleTimeoutBidi(settings.idleTimeout, Some(incoming.remoteAddress)) join incoming.flow)
+              .join(HttpConnectionIdleTimeoutBidi(settings.idleTimeout, Some(incoming.remoteAddress)).join(
+                incoming.flow))
               .addAttributes(Http.cancellationStrategyAttributeForDelay(settings.streamCancellationDelay))
               .run().recover {
                 // Ignore incoming errors from the connection as they will cancel the binding.
@@ -108,10 +121,10 @@ private[http] final class Http2Ext(implicit val system: ActorSystem)
               throw e
           }
       }.mapMaterializedValue {
-        _.map(tcpBinding => ServerBinding(tcpBinding.localAddress)(
-          () => tcpBinding.unbind(),
-          timeout => masterTerminator.terminate(timeout)(fm.executionContext)
-        ))(fm.executionContext)
+        _.map(tcpBinding =>
+          ServerBinding(tcpBinding.localAddress)(
+            () => tcpBinding.unbind(),
+            timeout => masterTerminator.terminate(timeout)(fm.executionContext)))(fm.executionContext)
       }.to(Sink.ignore).run()
   }
 
@@ -125,13 +138,11 @@ private[http] final class Http2Ext(implicit val system: ActorSystem)
   }
 
   private def handleUpgradeRequests(
-    handler:  HttpRequest => Future[HttpResponse],
-    settings: ServerSettings,
-    log:      LoggingAdapter
-  ): HttpRequest => Future[HttpResponse] = { req =>
+      handler: HttpRequest => Future[HttpResponse],
+      settings: ServerSettings,
+      log: LoggingAdapter): HttpRequest => Future[HttpResponse] = { req =>
     req.header[Upgrade] match {
-      case Some(upgrade) if upgrade.protocols.exists(_.name equalsIgnoreCase "h2c") =>
-
+      case Some(upgrade) if upgrade.protocols.exists(_.name.equalsIgnoreCase("h2c")) =>
         log.debug("Got h2c upgrade request from HTTP/1.1 to HTTP2")
 
         // https://http2.github.io/http2-spec/#Http2SettingsHeader 3.2.1 HTTP2-Settings Header Field
@@ -151,9 +162,11 @@ private[http] final class Http2Ext(implicit val system: ActorSystem)
               Flow[HttpRequest]
                 .watchTermination()(Keep.right)
                 .prepend(injectedRequest)
-                .via(Http2Blueprint.handleWithStreamIdHeader(settings.http2Settings.maxConcurrentStreams)(handler)(system.dispatcher))
+                .via(Http2Blueprint.handleWithStreamIdHeader(settings.http2Settings.maxConcurrentStreams)(handler)(
+                  system.dispatcher))
                 // the settings from the header are injected into the blueprint as initial demuxer settings
-                .joinMat(Http2Blueprint.serverStack(settings, log, settingsFromHeader, true, telemetry, Http().dateHeaderRendering))(Keep.left))
+                .joinMat(Http2Blueprint.serverStack(settings, log, settingsFromHeader, true, telemetry,
+                  Http().dateHeaderRendering))(Keep.left))
 
             Future.successful(
               HttpResponse(
@@ -161,10 +174,7 @@ private[http] final class Http2Ext(implicit val system: ActorSystem)
                 immutable.Seq[HttpHeader](
                   ConnectionUpgradeHeader,
                   UpgradeHeader,
-                  UpgradeToOtherProtocolResponseHeader(serverLayer)
-                )
-              )
-            )
+                  UpgradeToOtherProtocolResponseHeader(serverLayer))))
           case immutable.Seq(Failure(e)) =>
             log.warning("Failed to parse http2-settings header in upgrade [{}], continuing with HTTP/1.1", e.getMessage)
             handler(req)
@@ -182,7 +192,8 @@ private[http] final class Http2Ext(implicit val system: ActorSystem)
   val ConnectionUpgradeHeader = Connection(List("upgrade"))
   val UpgradeHeader = Upgrade(List(UpgradeProtocol("h2c")))
 
-  def httpsWithAlpn(httpsContext: HttpsConnectionContext)(http1: HttpImplementation, http2: HttpImplementation): Flow[ByteString, ByteString, Future[ServerTerminator]] = {
+  def httpsWithAlpn(httpsContext: HttpsConnectionContext)(
+      http1: HttpImplementation, http2: HttpImplementation): Flow[ByteString, ByteString, Future[ServerTerminator]] = {
     // Mutable cell to transport the chosen protocol from the SSLEngine to
     // the switch stage.
     // Doesn't need to be volatile because there's a happens-before relationship (enforced by memory barriers)
@@ -209,11 +220,13 @@ private[http] final class Http2Ext(implicit val system: ActorSystem)
     }
     val tls = TLS(() => createEngine(), _ => Success(()), IgnoreComplete)
 
-    ProtocolSwitch(_ => getChosenProtocol(), http1, http2) join
-      tls
+    ProtocolSwitch(_ => getChosenProtocol(), http1, http2).join(
+      tls)
   }
 
-  def outgoingConnection(host: String, port: Int, connectionContext: HttpsConnectionContext, clientConnectionSettings: ClientConnectionSettings, log: LoggingAdapter): Flow[HttpRequest, HttpResponse, Future[OutgoingConnection]] = {
+  def outgoingConnection(host: String, port: Int, connectionContext: HttpsConnectionContext,
+      clientConnectionSettings: ClientConnectionSettings, log: LoggingAdapter)
+      : Flow[HttpRequest, HttpResponse, Future[OutgoingConnection]] = {
     def createEngine(): SSLEngine = {
       val engine = connectionContext.sslContextData match {
         // TODO FIXME configure hostname verification for this case
@@ -228,22 +241,29 @@ private[http] final class Http2Ext(implicit val system: ActorSystem)
       engine
     }
 
-    val stack = Http2Blueprint.clientStack(clientConnectionSettings, log, telemetry).addAttributes(prepareClientAttributes(host, port)) atop
-      Http2Blueprint.unwrapTls atop
-      LogByteStringTools.logTLSBidiBySetting("client-plain-text", clientConnectionSettings.logUnencryptedNetworkBytes) atop
-      TLS(createEngine _, closing = TLSClosing.eagerClose)
+    val stack = Http2Blueprint.clientStack(clientConnectionSettings, log, telemetry).addAttributes(
+      prepareClientAttributes(host, port)).atop(
+      Http2Blueprint.unwrapTls).atop(
+      LogByteStringTools.logTLSBidiBySetting("client-plain-text",
+        clientConnectionSettings.logUnencryptedNetworkBytes)).atop(
+      TLS(createEngine _, closing = TLSClosing.eagerClose))
 
-    stack.joinMat(clientConnectionSettings.transport.connectTo(host, port, clientConnectionSettings)(system.classicSystem))(Keep.right)
+    stack.joinMat(clientConnectionSettings.transport.connectTo(host, port, clientConnectionSettings)(
+      system.classicSystem))(Keep.right)
       .addAttributes(Http.cancellationStrategyAttributeForDelay(clientConnectionSettings.streamCancellationDelay))
   }
 
-  def outgoingConnectionPriorKnowledge(host: String, port: Int, clientConnectionSettings: ClientConnectionSettings, log: LoggingAdapter): Flow[HttpRequest, HttpResponse, Future[OutgoingConnection]] = {
-    val stack = Http2Blueprint.clientStack(clientConnectionSettings, log, telemetry).addAttributes(prepareClientAttributes(host, port)) atop
-      Http2Blueprint.unwrapTls atop
-      LogByteStringTools.logTLSBidiBySetting("client-plain-text", clientConnectionSettings.logUnencryptedNetworkBytes) atop
-      TLSPlacebo()
-
-    stack.joinMat(clientConnectionSettings.transport.connectTo(host, port, clientConnectionSettings)(system.classicSystem))(Keep.right)
+  def outgoingConnectionPriorKnowledge(host: String, port: Int, clientConnectionSettings: ClientConnectionSettings,
+      log: LoggingAdapter): Flow[HttpRequest, HttpResponse, Future[OutgoingConnection]] = {
+    val stack = Http2Blueprint.clientStack(clientConnectionSettings, log, telemetry).addAttributes(
+      prepareClientAttributes(host, port)).atop(
+      Http2Blueprint.unwrapTls).atop(
+      LogByteStringTools.logTLSBidiBySetting("client-plain-text",
+        clientConnectionSettings.logUnencryptedNetworkBytes)).atop(
+      TLSPlacebo())
+
+    stack.joinMat(clientConnectionSettings.transport.connectTo(host, port, clientConnectionSettings)(
+      system.classicSystem))(Keep.right)
       .addAttributes(Http.cancellationStrategyAttributeForDelay(clientConnectionSettings.streamCancellationDelay))
   }
 
@@ -267,10 +287,11 @@ private[http] object Http2 extends ExtensionId[Http2Ext] with ExtensionIdProvide
   def createExtension(system: ExtendedActorSystem): Http2Ext = new Http2Ext()(system)
 
   private[http] type HttpImplementation = Flow[SslTlsInbound, SslTlsOutbound, ServerTerminator]
-  private[http] type HttpPlusSwitching = (HttpImplementation, HttpImplementation) => Flow[ByteString, ByteString, Future[ServerTerminator]]
+  private[http] type HttpPlusSwitching =
+    (HttpImplementation, HttpImplementation) => Flow[ByteString, ByteString, Future[ServerTerminator]]
 
-  private[http] def priorKnowledge(http1: HttpImplementation, http2: HttpImplementation): Flow[ByteString, ByteString, Future[ServerTerminator]] =
+  private[http] def priorKnowledge(
+      http1: HttpImplementation, http2: HttpImplementation): Flow[ByteString, ByteString, Future[ServerTerminator]] =
     TLSPlacebo().reversed.joinMat(
-      ProtocolSwitch.byPreface(http1, http2)
-    )(Keep.right)
+      ProtocolSwitch.byPreface(http1, http2))(Keep.right)
 }
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2AlpnSupport.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2AlpnSupport.scala
index 012650592..2a7949f00 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2AlpnSupport.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2AlpnSupport.scala
@@ -21,32 +21,37 @@ import scala.util.Try
  */
 @InternalApi
 private[http] object Http2AlpnSupport {
-  //ALPN Protocol IDs https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids
+  // ALPN Protocol IDs https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids
   val H2 = "h2"
   val HTTP11 = "http/1.1"
+
   /**
    * Enables server-side Http/2 ALPN support for the given engine.
    */
   def enableForServer(engine: SSLEngine, setChosenProtocol: String => Unit): SSLEngine =
     if (isAlpnSupportedByJDK) Http2JDKAlpnSupport.jdkAlpnSupport(engine, setChosenProtocol)
-    else throw new RuntimeException(s"Need to run on a JVM >= 8u252 for ALPN support needed for HTTP/2. Running on ${sys.props("java.version")}")
+    else throw new RuntimeException(
+      s"Need to run on a JVM >= 8u252 for ALPN support needed for HTTP/2. Running on ${sys.props("java.version")}")
 
   def clientSetApplicationProtocols(engine: SSLEngine, protocols: Array[String]): Unit =
     if (isAlpnSupportedByJDK) Http2JDKAlpnSupport.clientSetApplicationProtocols(engine, protocols)
-    else throw new RuntimeException(s"Need to run on a JVM >= 8u252 for ALPN support needed for HTTP/2. Running on ${sys.props("java.version")}")
+    else throw new RuntimeException(
+      s"Need to run on a JVM >= 8u252 for ALPN support needed for HTTP/2. Running on ${sys.props("java.version")}")
 
   private def isAlpnSupportedByJDK: Boolean =
     // ALPN is supported starting with JDK 9
     JavaVersion.majorVersion >= 9 ||
-      (classOf[SSLEngine].getMethods.exists(_.getName == "setHandshakeApplicationProtocolSelector")
-        && {
-          // This method only exists in the jetty-alpn provided implementation. If it exists an old version of the jetty-alpn-agent is active which is not supported
-          // on JDK>= 8u252. When running on such a JVM, you can either just remove the agent or (if you want to support older JVMs with the same command line),
-          // use jetty-alpn-agent >= 2.0.10
-          val jettyAlpnClassesAvailable = Try(Class.forName("sun.security.ssl.ALPNExtension")).toOption.exists(_.getDeclaredMethods.exists(_.getName == "init"))
-          if (jettyAlpnClassesAvailable) throw new RuntimeException("On JDK >= 8u252 you need to either remove jetty-alpn-agent or use version 2.0.10 (which is a noop)")
-          else true
-        })
+    (classOf[SSLEngine].getMethods.exists(_.getName == "setHandshakeApplicationProtocolSelector")
+    && {
+      // This method only exists in the jetty-alpn provided implementation. If it exists an old version of the jetty-alpn-agent is active which is not supported
+      // on JDK>= 8u252. When running on such a JVM, you can either just remove the agent or (if you want to support older JVMs with the same command line),
+      // use jetty-alpn-agent >= 2.0.10
+      val jettyAlpnClassesAvailable = Try(Class.forName("sun.security.ssl.ALPNExtension")).toOption.exists(
+        _.getDeclaredMethods.exists(_.getName == "init"))
+      if (jettyAlpnClassesAvailable) throw new RuntimeException(
+        "On JDK >= 8u252 you need to either remove jetty-alpn-agent or use version 2.0.10 (which is a noop)")
+      else true
+    })
 }
 
 /**
@@ -61,8 +66,8 @@ private[http] object Http2JDKAlpnSupport {
       val chosen = chooseProtocol(protocols)
       chosen.foreach(setChosenProtocol)
 
-      //returning null here means aborting the handshake
-      //see https://docs.oracle.com/en/java/javase/11/docs/api/java.base/javax/net/ssl/SSLEngine.html#setHandshakeApplicationProtocolSelector(java.util.function.BiFunction)
+      // returning null here means aborting the handshake
+      // see https://docs.oracle.com/en/java/javase/11/docs/api/java.base/javax/net/ssl/SSLEngine.html#setHandshakeApplicationProtocolSelector(java.util.function.BiFunction)
       chosen.orNull
     }
 
@@ -74,7 +79,8 @@ private[http] object Http2JDKAlpnSupport {
     else if (protocols.contains(HTTP11)) Some(HTTP11)
     else None
 
-  def applySessionParameters(engine: SSLEngine, sessionParameters: NegotiateNewSession): Unit = TlsUtils.applySessionParameters(engine, sessionParameters)
+  def applySessionParameters(engine: SSLEngine, sessionParameters: NegotiateNewSession): Unit =
+    TlsUtils.applySessionParameters(engine, sessionParameters)
 
   def clientSetApplicationProtocols(engine: SSLEngine, protocols: Array[String]): Unit = {
     val params = engine.getSSLParameters
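[editor's note, not part of the patch] `Http2JDKAlpnSupport.chooseProtocol` above implements the server-side ALPN preference: pick "h2" when offered, fall back to "http/1.1", otherwise abort the handshake (signalled by `null`/`None`). A standalone sketch of that selection logic, detached from `SSLEngine` (illustrative, not the project's code):

```scala
// Sketch of the ALPN preference order used by Http2JDKAlpnSupport above:
// prefer h2, fall back to http/1.1, else None (which the JDK selector
// callback maps to null, aborting the TLS handshake).
object AlpnChoice {
  val H2 = "h2"
  val HTTP11 = "http/1.1"

  def chooseProtocol(offered: Seq[String]): Option[String] =
    if (offered.contains(H2)) Some(H2)
    else if (offered.contains(HTTP11)) Some(HTTP11)
    else None
}
```

In the real code this sits inside the `BiFunction` passed to the JDK 9+ `SSLEngine.setHandshakeApplicationProtocolSelector`, where `chosen.orNull` converts the `None` case into the handshake-aborting `null`.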
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2Blueprint.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2Blueprint.scala
index f1d2612e5..45355e86f 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2Blueprint.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2Blueprint.scala
@@ -18,7 +18,13 @@ import akka.http.impl.engine.server.ServerTerminator
 import akka.http.impl.util.LogByteStringTools.logTLSBidiBySetting
 import akka.http.impl.util.StreamUtils
 import akka.http.scaladsl.model._
-import akka.http.scaladsl.settings.{ ClientConnectionSettings, Http2ClientSettings, Http2ServerSettings, ParserSettings, ServerSettings }
+import akka.http.scaladsl.settings.{
+  ClientConnectionSettings,
+  Http2ClientSettings,
+  Http2ServerSettings,
+  ParserSettings,
+  ServerSettings
+}
 import akka.stream.{ BidiShape, Graph, StreamTcpException }
 import akka.stream.TLSProtocol._
 import akka.stream.scaladsl.{ BidiFlow, Flow, Keep, Source }
@@ -34,13 +40,12 @@ import scala.util.control.NonFatal
  */
 @InternalApi
 private[http2] case class Http2SubStream(
-  initialHeaders: ParsedHeadersFrame,
-  // outgoing response trailing headers can either be passed in eagerly via an attribute
-  // or streaming as the LastChunk of a chunked data stream
-  trailingHeaders:       OptionVal[ParsedHeadersFrame],
-  data:                  Either[ByteString, Source[Any /* ByteString | HttpEntity.ChunkStreamPart */ , Any]],
-  correlationAttributes: Map[AttributeKey[_], _]
-) {
+    initialHeaders: ParsedHeadersFrame,
+    // outgoing response trailing headers can either be passed in eagerly via an attribute
+    // or streaming as the LastChunk of a chunked data stream
+    trailingHeaders: OptionVal[ParsedHeadersFrame],
+    data: Either[ByteString, Source[Any /* ByteString | HttpEntity.ChunkStreamPart */, Any]],
+    correlationAttributes: Map[AttributeKey[_], _]) {
   def streamId: Int = initialHeaders.streamId
   def hasEntity: Boolean = !initialHeaders.endStream
 
@@ -74,7 +79,8 @@ private[http2] case class Http2SubStream(
 }
 @InternalApi
 private[http2] object Http2SubStream {
-  def apply(entity: HttpEntity, headers: ParsedHeadersFrame, trailingHeaders: OptionVal[ParsedHeadersFrame], correlationAttributes: Map[AttributeKey[_], _] = Map.empty): Http2SubStream = {
+  def apply(entity: HttpEntity, headers: ParsedHeadersFrame, trailingHeaders: OptionVal[ParsedHeadersFrame],
+      correlationAttributes: Map[AttributeKey[_], _] = Map.empty): Http2SubStream = {
     val data =
       entity match {
         case HttpEntity.Chunked(_, chunks) => Right(chunks)
@@ -89,10 +95,12 @@ private[http2] object Http2SubStream {
 @InternalApi
 private[http] object Http2Blueprint {
 
-  def serverStackTls(settings: ServerSettings, log: LoggingAdapter, telemetry: TelemetrySpi, dateHeaderRendering: DateHeaderRendering): BidiFlow[HttpResponse, SslTlsOutbound, SslTlsInbound, HttpRequest, ServerTerminator] =
-    serverStack(settings, log, telemetry = telemetry, dateHeaderRendering = dateHeaderRendering) atop
-      unwrapTls atop
-      logTLSBidiBySetting("server-plain-text", settings.logUnencryptedNetworkBytes)
+  def serverStackTls(settings: ServerSettings, log: LoggingAdapter, telemetry: TelemetrySpi,
+      dateHeaderRendering: DateHeaderRendering)
+      : BidiFlow[HttpResponse, SslTlsOutbound, SslTlsInbound, HttpRequest, ServerTerminator] =
+    serverStack(settings, log, telemetry = telemetry, dateHeaderRendering = dateHeaderRendering).atop(
+      unwrapTls).atop(
+      logTLSBidiBySetting("server-plain-text", settings.logUnencryptedNetworkBytes))
 
   // format: OFF
   def serverStack(
@@ -116,22 +124,24 @@ private[http] object Http2Blueprint {
   // LogByteStringTools.logToStringBidi("framing") atop // enable for debugging
   // format: ON
 
-  def clientStack(settings: ClientConnectionSettings, log: LoggingAdapter, telemetry: TelemetrySpi): BidiFlow[HttpRequest, ByteString, ByteString, HttpResponse, NotUsed] = {
+  def clientStack(settings: ClientConnectionSettings, log: LoggingAdapter, telemetry: TelemetrySpi)
+      : BidiFlow[HttpRequest, ByteString, ByteString, HttpResponse, NotUsed] = {
     // This is master header parser, every other usage should do .createShallowCopy()
     // HttpHeaderParser is not thread safe and should not be called concurrently,
     // the internal trie, however, has built-in protection and will do copy-on-write
     val masterHttpHeaderParser = HttpHeaderParser(settings.parserSettings, log)
-    telemetry.client atop
-      httpLayerClient(masterHttpHeaderParser, settings, log) atop
-      clientDemux(settings.http2Settings, masterHttpHeaderParser) atop
-      FrameLogger.logFramesIfEnabled(settings.http2Settings.logFrames) atop // enable for debugging
-      hpackCoding(masterHttpHeaderParser, settings.parserSettings) atop
-      framingClient(log) atop
-      errorHandling(log) atop
-      idleTimeoutIfConfigured(settings.idleTimeout)
+    telemetry.client.atop(
+      httpLayerClient(masterHttpHeaderParser, settings, log)).atop(
+      clientDemux(settings.http2Settings, masterHttpHeaderParser)).atop(
+      FrameLogger.logFramesIfEnabled(settings.http2Settings.logFrames)).atop( // enable for debugging
+      hpackCoding(masterHttpHeaderParser, settings.parserSettings)).atop(
+      framingClient(log)).atop(
+      errorHandling(log)).atop(
+      idleTimeoutIfConfigured(settings.idleTimeout))
   }
 
-  def httpLayerClient(masterHttpHeaderParser: HttpHeaderParser, settings: ClientConnectionSettings, log: LoggingAdapter): BidiFlow[HttpRequest, Http2SubStream, Http2SubStream, HttpResponse, NotUsed] =
+  def httpLayerClient(masterHttpHeaderParser: HttpHeaderParser, settings: ClientConnectionSettings, log: LoggingAdapter)
+      : BidiFlow[HttpRequest, Http2SubStream, Http2SubStream, HttpResponse, NotUsed] =
     BidiFlow.fromFlows(
       Flow[HttpRequest].statefulMapConcat { () =>
         val renderer = new RequestRendering(settings, log)
@@ -140,8 +150,7 @@ private[http] object Http2Blueprint {
       StreamUtils.statefulAttrsMap[Http2SubStream, HttpResponse] { attrs =>
         val headerParser = masterHttpHeaderParser.createShallowCopy()
         stream => ResponseParsing.parseResponse(headerParser, settings.parserSettings, attrs)(stream)
-      }
-    )
+      })
 
   def idleTimeoutIfConfigured(timeout: Duration): BidiFlow[ByteString, ByteString, ByteString, ByteString, NotUsed] =
     timeout match {
@@ -154,18 +163,19 @@ private[http] object Http2Blueprint {
       StreamUtils.encodeErrorAndComplete {
         case ex: Http2Compliance.Http2ProtocolException =>
           // protocol errors are most likely provoked by peer, so we don't log them noisily
-          if (log.isDebugEnabled) log.debug(s"HTTP2 connection failed with error [${ex.getMessage}]. Sending ${ex.errorCode} and closing connection.")
+          if (log.isDebugEnabled) log.debug(
+            s"HTTP2 connection failed with error [${ex.getMessage}]. Sending ${ex.errorCode} and closing connection.")
           FrameRenderer.render(GoAwayFrame(0, ex.errorCode))
-        case ex: StreamTcpException => throw ex // TCP connection is probably broken: just forward exception
+        case ex: StreamTcpException       => throw ex // TCP connection is probably broken: just forward exception
         case ex: HttpIdleTimeoutException =>
           // idle timeout stage is propagating this error but since it is already coming back we just propagate without logging
           throw ex
         case NonFatal(ex) =>
-          log.error(s"HTTP2 connection failed with error [${ex.getMessage}]. Sending INTERNAL_ERROR and closing connection.")
+          log.error(
+            s"HTTP2 connection failed with error [${ex.getMessage}]. Sending INTERNAL_ERROR and closing connection.")
           FrameRenderer.render(GoAwayFrame(0, Http2Protocol.ErrorCode.INTERNAL_ERROR))
       },
-      Flow[ByteString]
-    )
+      Flow[ByteString])
 
   def framing(log: LoggingAdapter): BidiFlow[FrameEvent, ByteString, ByteString, FrameEvent, NotUsed] =
     BidiFlow.fromFlows(
@@ -185,24 +195,26 @@ private[http] object Http2Blueprint {
    * TODO: introduce another FrameEvent type that exclude HeadersFrame and ContinuationFrame from
    * reaching the higher-level.
    */
-  def hpackCoding(masterHttpHeaderParser: HttpHeaderParser, parserSettings: ParserSettings): BidiFlow[FrameEvent, FrameEvent, FrameEvent, FrameEvent, NotUsed] =
+  def hpackCoding(masterHttpHeaderParser: HttpHeaderParser, parserSettings: ParserSettings)
+      : BidiFlow[FrameEvent, FrameEvent, FrameEvent, FrameEvent, NotUsed] =
     BidiFlow.fromFlows(
       Flow[FrameEvent].via(HeaderCompression),
-      Flow[FrameEvent].via(new HeaderDecompression(masterHttpHeaderParser, parserSettings))
-    )
+      Flow[FrameEvent].via(new HeaderDecompression(masterHttpHeaderParser, parserSettings)))
 
   /**
    * Creates substreams for every stream and manages stream state machines
    * and handles prioritization (TODO: later)
    */
-  def serverDemux(settings: Http2ServerSettings, initialDemuxerSettings: immutable.Seq[Setting], upgraded: Boolean): BidiFlow[Http2SubStream, FrameEvent, FrameEvent, Http2SubStream, ServerTerminator] =
+  def serverDemux(settings: Http2ServerSettings, initialDemuxerSettings: immutable.Seq[Setting], upgraded: Boolean)
+      : BidiFlow[Http2SubStream, FrameEvent, FrameEvent, Http2SubStream, ServerTerminator] =
     BidiFlow.fromGraph(new Http2ServerDemux(settings, initialDemuxerSettings, upgraded))
 
   /**
    * Creates substreams for every stream and manages stream state machines
    * and handles prioritization (TODO: later)
    */
-  def clientDemux(settings: Http2ClientSettings, masterHttpHeaderParser: HttpHeaderParser): BidiFlow[Http2SubStream, FrameEvent, FrameEvent, Http2SubStream, ServerTerminator] =
+  def clientDemux(settings: Http2ClientSettings, masterHttpHeaderParser: HttpHeaderParser)
+      : BidiFlow[Http2SubStream, FrameEvent, FrameEvent, Http2SubStream, ServerTerminator] =
     BidiFlow.fromGraph(new Http2ClientDemux(settings, masterHttpHeaderParser))
 
   /**
@@ -213,7 +225,8 @@ private[http] object Http2Blueprint {
    * that must be reproduced in an HttpResponse. This can be done automatically for the `bind` API but for
    * `bindFlow` the user needs to take care of this manually.
    */
-  def httpLayer(settings: ServerSettings, log: LoggingAdapter, dateHeaderRendering: DateHeaderRendering): BidiFlow[HttpResponse, Http2SubStream, Http2SubStream, HttpRequest, NotUsed] = {
+  def httpLayer(settings: ServerSettings, log: LoggingAdapter, dateHeaderRendering: DateHeaderRendering)
+      : BidiFlow[HttpResponse, Http2SubStream, Http2SubStream, HttpRequest, NotUsed] = {
     val parserSettings = settings.parserSettings
     // This is master header parser, every other usage should do .createShallowCopy()
     // HttpHeaderParser is not thread safe and should not be called concurrently,
@@ -231,7 +244,8 @@ private[http] object Http2Blueprint {
    * Returns a flow that handles `parallelism` requests in parallel, automatically keeping track of the
    * Http2StreamIdHeader between request and responses.
    */
-  def handleWithStreamIdHeader(parallelism: Int)(handler: HttpRequest => Future[HttpResponse])(implicit ec: ExecutionContext): Flow[HttpRequest, HttpResponse, NotUsed] =
+  def handleWithStreamIdHeader(parallelism: Int)(handler: HttpRequest => Future[HttpResponse])(
+      implicit ec: ExecutionContext): Flow[HttpRequest, HttpResponse, NotUsed] =
     Flow[HttpRequest]
       .mapAsyncUnordered(parallelism) { req =>
         // The handler itself may do significant work so make sure to schedule it separately. This is especially important for HTTP/2 where it is expected that
@@ -248,7 +262,7 @@ private[http] object Http2Blueprint {
       }
 
   private[http2] def logParsingError(info: ErrorInfo, log: LoggingAdapter,
-                                     setting: ParserSettings.ErrorLoggingVerbosity): Unit =
+      setting: ParserSettings.ErrorLoggingVerbosity): Unit =
     setting match {
       case ParserSettings.ErrorLoggingVerbosity.Off    => // nothing to do
       case ParserSettings.ErrorLoggingVerbosity.Simple => log.warning(info.summary)
@@ -256,12 +270,14 @@ private[http] object Http2Blueprint {
     }
 
   private[http] val unwrapTls: BidiFlow[ByteString, SslTlsOutbound, SslTlsInbound, ByteString, NotUsed] =
-    BidiFlow.fromFlows(Flow[ByteString].map(SendBytes), Flow[SslTlsInbound].collect {
-      case SessionBytes(_, bytes) => bytes
-    })
+    BidiFlow.fromFlows(Flow[ByteString].map(SendBytes),
+      Flow[SslTlsInbound].collect {
+        case SessionBytes(_, bytes) => bytes
+      })
 
   implicit class BidiFlowExt[I1, O1, I2, O2, Mat](bidi: BidiFlow[I1, O1, I2, O2, Mat]) {
-    def atopKeepRight[OO1, II2, Mat2](other: Graph[BidiShape[O1, OO1, II2, I2], Mat2]): BidiFlow[I1, OO1, II2, O2, Mat2] =
+    def atopKeepRight[OO1, II2, Mat2](
+        other: Graph[BidiShape[O1, OO1, II2, I2], Mat2]): BidiFlow[I1, OO1, II2, O2, Mat2] =
       bidi.atopMat(other)(Keep.right)
   }
 }
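The hunks above mostly reflow multi-line signatures that build layered BidiFlows and stack them with `atopKeepRight`. As a plain-Scala sketch of that stacking idea (not Akka's actual API; `Bidi` and the sample layers are illustrative only), a bidi layer can be modeled as a pair of functions, and `atop` composes the outbound path downward and the inbound path back upward:

```scala
// Hypothetical model of BidiFlow-style stacking: a layer is an (outbound, inbound)
// pair of functions; `atop` stacks another layer "inside" this one.
final case class Bidi[I1, O1, I2, O2](outbound: I1 => O1, inbound: I2 => O2) {
  def atop[OO1, II2](other: Bidi[O1, OO1, II2, I2]): Bidi[I1, OO1, II2, O2] =
    Bidi(outbound.andThen(other.outbound), other.inbound.andThen(inbound))
}

// Illustrative layers: outer "framing" layer and inner "coding" layer.
val framing = Bidi[Int, String, String, Int](i => s"#$i", s => s.length)
val coding = Bidi[String, String, String, String](s => s.toUpperCase, s => s.reverse)
val stacked = framing.atop(coding)

println(stacked.outbound(7)) // framing then coding on the way out
println(stacked.inbound("ab")) // coding then framing on the way back
```

The real `atopKeepRight` in the diff additionally selects the right materialized value via `atopMat(other)(Keep.right)`, which this value-level sketch has no analogue for.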
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2Compliance.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2Compliance.scala
index c003d45bb..b4104c262 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2Compliance.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2Compliance.scala
@@ -12,21 +12,24 @@ import akka.http.impl.engine.http2.Http2Protocol.ErrorCode
 private[http2] object Http2Compliance {
 
   final class IllegalHttp2StreamIdException(id: Int, expected: String)
-    extends Http2ProtocolException(s"Illegal HTTP/2 stream id: [$id]. $expected!")
+      extends Http2ProtocolException(s"Illegal HTTP/2 stream id: [$id]. $expected!")
 
-  final class MissingHttpIdHeaderException extends Http2ProtocolException("Expected `Http2StreamIdHeader` header to be present but was missing!")
+  final class MissingHttpIdHeaderException
+      extends Http2ProtocolException("Expected `Http2StreamIdHeader` header to be present but was missing!")
 
   final class IllegalHttp2StreamDependency(id: Int)
-    extends Http2ProtocolException(s"Illegal self dependency of stream for id: [$id]!")
+      extends Http2ProtocolException(s"Illegal self dependency of stream for id: [$id]!")
 
-  final class IllegalPayloadInSettingsAckFrame(size: Int, expected: String) extends IllegalHttp2FrameSize(size, expected)
+  final class IllegalPayloadInSettingsAckFrame(size: Int, expected: String)
+      extends IllegalHttp2FrameSize(size, expected)
 
-  final class IllegalPayloadLengthInSettingsFrame(size: Int, expected: String) extends IllegalHttp2FrameSize(size, expected)
+  final class IllegalPayloadLengthInSettingsFrame(size: Int, expected: String)
+      extends IllegalHttp2FrameSize(size, expected)
 
   final def missingHttpIdHeaderException = throw new MissingHttpIdHeaderException
 
   private[akka] sealed class IllegalHttp2FrameSize(size: Int, expected: String)
-    extends Http2ProtocolException(ErrorCode.FRAME_SIZE_ERROR, s"Illegal HTTP/2 frame size: [$size]. $expected!")
+      extends Http2ProtocolException(ErrorCode.FRAME_SIZE_ERROR, s"Illegal HTTP/2 frame size: [$size]. $expected!")
 
   // require methods use `if` because `require` allocates
 
@@ -34,14 +37,17 @@ private[http2] object Http2Compliance {
   def validateMaxFrameSize(value: Int): Unit = {
     import Http2Protocol.MinFrameSize
     import Http2Protocol.MaxFrameSize
-    if (value < MinFrameSize) throw new Http2ProtocolException(ErrorCode.PROTOCOL_ERROR, s"MAX_FRAME_SIZE MUST NOT be < than $MinFrameSize, attempted setting to: $value!")
-    if (value > MaxFrameSize) throw new Http2ProtocolException(ErrorCode.PROTOCOL_ERROR, s"MAX_FRAME_SIZE MUST NOT be > than $MaxFrameSize, attempted setting to: $value!")
+    if (value < MinFrameSize) throw new Http2ProtocolException(ErrorCode.PROTOCOL_ERROR,
+      s"MAX_FRAME_SIZE MUST NOT be < than $MinFrameSize, attempted setting to: $value!")
+    if (value > MaxFrameSize) throw new Http2ProtocolException(ErrorCode.PROTOCOL_ERROR,
+      s"MAX_FRAME_SIZE MUST NOT be > than $MaxFrameSize, attempted setting to: $value!")
   }
 
   class Http2ProtocolException(val errorCode: ErrorCode, message: String) extends IllegalStateException(message) {
     def this(message: String) = this(ErrorCode.PROTOCOL_ERROR, message)
   }
-  class Http2ProtocolStreamException(val streamId: Int, val errorCode: ErrorCode, message: String) extends IllegalStateException(message)
+  class Http2ProtocolStreamException(val streamId: Int, val errorCode: ErrorCode, message: String)
+      extends IllegalStateException(message)
 
   final def requireZeroStreamId(id: Int): Unit =
     if (id != 0) throw new IllegalHttp2StreamIdException(id, "MUST BE == 0.")
@@ -51,8 +57,10 @@ private[http2] object Http2Compliance {
 
   final def requirePositiveWindowUpdateIncrement(streamId: Int, increment: Int): Unit =
     if (increment <= 0)
-      if (streamId == 0) throw new Http2ProtocolException(ErrorCode.PROTOCOL_ERROR, "WINDOW_UPDATE MUST be > 0, was: " + increment) // cause GOAWAY
-      else throw new Http2ProtocolStreamException(streamId, ErrorCode.PROTOCOL_ERROR, "WINDOW_UPDATE MUST be > 0, was: " + increment) // cause RST_STREAM
+      if (streamId == 0)
+        throw new Http2ProtocolException(ErrorCode.PROTOCOL_ERROR, "WINDOW_UPDATE MUST be > 0, was: " + increment) // cause GOAWAY
+      else throw new Http2ProtocolStreamException(streamId, ErrorCode.PROTOCOL_ERROR,
+        "WINDOW_UPDATE MUST be > 0, was: " + increment) // cause RST_STREAM
 
   /** checks whether the stream id was client-initiated, i.e. whether it is odd-numbered */
   final def isClientInitiatedStreamId(id: Int): Boolean = id % 2 != 0
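The `Http2Compliance` hunks above keep the `if` + `throw` validation style (the file notes that `require` allocates a thunk for its by-name message). A self-contained sketch of that pattern, with the frame-size bounds from RFC 7540 (the names here are illustrative, not the real akka-http classes):

```scala
object FrameSizeCheck {
  // RFC 7540: SETTINGS_MAX_FRAME_SIZE must lie in [2^14, 2^24 - 1]
  val MinFrameSize = 16384
  val MaxFrameSize = 16777215

  final class ProtocolException(msg: String) extends IllegalStateException(msg)

  // Plain `if` + throw instead of `require`: `require`'s by-name message
  // argument allocates a thunk, and this check sits on a hot path.
  def validateMaxFrameSize(value: Int): Unit = {
    if (value < MinFrameSize)
      throw new ProtocolException(s"MAX_FRAME_SIZE MUST NOT be < $MinFrameSize, attempted: $value")
    if (value > MaxFrameSize)
      throw new ProtocolException(s"MAX_FRAME_SIZE MUST NOT be > $MaxFrameSize, attempted: $value")
  }

  // odd stream ids are client-initiated (RFC 7540, section 5.1.1)
  def isClientInitiatedStreamId(id: Int): Boolean = id % 2 != 0
}
```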
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2Demux.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2Demux.scala
index e8386435d..29eca043d 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2Demux.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2Demux.scala
@@ -24,7 +24,13 @@ import akka.stream.Inlet
 import akka.stream.Outlet
 import akka.stream.impl.io.ByteStringParser.ParsingException
 import akka.stream.scaladsl.Source
-import akka.stream.stage.{ GraphStageLogic, GraphStageWithMaterializedValue, InHandler, StageLogging, TimerGraphStageLogic }
+import akka.stream.stage.{
+  GraphStageLogic,
+  GraphStageWithMaterializedValue,
+  InHandler,
+  StageLogging,
+  TimerGraphStageLogic
+}
 import akka.util.{ ByteString, OptionVal }
 
 import scala.collection.immutable
@@ -39,14 +45,15 @@ import scala.util.control.NonFatal
  */
 @InternalApi
 private[http2] class Http2ClientDemux(http2Settings: Http2ClientSettings, masterHttpHeaderParser: HttpHeaderParser)
-  extends Http2Demux(http2Settings, initialRemoteSettings = Nil, upgraded = false, isServer = false) {
+    extends Http2Demux(http2Settings, initialRemoteSettings = Nil, upgraded = false, isServer = false) {
 
   def wrapTrailingHeaders(headers: ParsedHeadersFrame): Option[ChunkStreamPart] = {
     val headerParser = masterHttpHeaderParser.createShallowCopy()
-    Some(LastChunk(extension = "", headers.keyValuePairs.map {
-      case (name, value: HttpHeader) => value
-      case (name, value)             => parseHeaderPair(headerParser, name, value.asInstanceOf[String])
-    }.toList))
+    Some(LastChunk(extension = "",
+      headers.keyValuePairs.map {
+        case (name, value: HttpHeader) => value
+        case (name, value)             => parseHeaderPair(headerParser, name, value.asInstanceOf[String])
+      }.toList))
   }
 
   override def completionTimeout: FiniteDuration = http2Settings.completionTimeout
@@ -56,12 +63,14 @@ private[http2] class Http2ClientDemux(http2Settings: Http2ClientSettings, master
  * INTERNAL API
  */
 @InternalApi
-private[http2] class Http2ServerDemux(http2Settings: Http2ServerSettings, initialRemoteSettings: immutable.Seq[Setting], upgraded: Boolean)
-  extends Http2Demux(http2Settings, initialRemoteSettings, upgraded, isServer = true) {
+private[http2] class Http2ServerDemux(http2Settings: Http2ServerSettings, initialRemoteSettings: immutable.Seq[Setting],
+    upgraded: Boolean)
+    extends Http2Demux(http2Settings, initialRemoteSettings, upgraded, isServer = true) {
   // We don't provide access to incoming trailing request headers on the server side
   def wrapTrailingHeaders(headers: ParsedHeadersFrame): Option[ChunkStreamPart] = None
 
-  def completionTimeout: FiniteDuration = throw new IllegalArgumentException("Completion timeout not supported for servers")
+  def completionTimeout: FiniteDuration =
+    throw new IllegalArgumentException("Completion timeout not supported for servers")
 }
 
 /**
@@ -83,8 +92,7 @@ private[http2] object ConfigurablePing {
           else settings.pingInterval.min(settings.pingTimeout)
         new EnabledPingState(
           tickInterval,
-          pingEveryNTickWithoutData = settings.pingInterval.toMillis / tickInterval.toMillis
-        )
+          pingEveryNTickWithoutData = settings.pingInterval.toMillis / tickInterval.toMillis)
       }
     }
   }
@@ -196,8 +204,10 @@ private[http2] object ConfigurablePing {
  *                              on the server end of a connection.
  */
 @InternalApi
-private[http2] abstract class Http2Demux(http2Settings: Http2CommonSettings, initialRemoteSettings: immutable.Seq[Setting], upgraded: Boolean, isServer: Boolean)
-  extends GraphStageWithMaterializedValue[BidiShape[Http2SubStream, FrameEvent, FrameEvent, Http2SubStream], ServerTerminator] {
+private[http2] abstract class Http2Demux(http2Settings: Http2CommonSettings,
+    initialRemoteSettings: immutable.Seq[Setting], upgraded: Boolean, isServer: Boolean)
+    extends GraphStageWithMaterializedValue[BidiShape[Http2SubStream, FrameEvent, FrameEvent, Http2SubStream],
+      ServerTerminator] {
   stage =>
   val frameIn = Inlet[FrameEvent]("Demux.frameIn")
   val frameOut = Outlet[FrameEvent]("Demux.frameOut")
@@ -212,12 +222,14 @@ private[http2] abstract class Http2Demux(http2Settings: Http2CommonSettings, ini
   def completionTimeout: FiniteDuration
 
   override def createLogicAndMaterializedValue(inheritedAttributes: Attributes): (GraphStageLogic, ServerTerminator) = {
-    object Logic extends TimerGraphStageLogic(shape) with Http2MultiplexerSupport with Http2StreamHandling with GenericOutletSupport with StageLogging with LogHelper with ServerTerminator {
+    object Logic extends TimerGraphStageLogic(shape) with Http2MultiplexerSupport with Http2StreamHandling
+        with GenericOutletSupport with StageLogging with LogHelper with ServerTerminator {
       logic =>
 
       import Http2Demux.CompletionTimeout
 
-      def wrapTrailingHeaders(headers: ParsedHeadersFrame): Option[HttpEntity.ChunkStreamPart] = stage.wrapTrailingHeaders(headers)
+      def wrapTrailingHeaders(headers: ParsedHeadersFrame): Option[HttpEntity.ChunkStreamPart] =
+        stage.wrapTrailingHeaders(headers)
 
       override def isServer: Boolean = stage.isServer
 
@@ -225,7 +237,8 @@ private[http2] abstract class Http2Demux(http2Settings: Http2CommonSettings, ini
 
       override def isUpgraded: Boolean = upgraded
 
-      override protected def logSource: Class[_] = if (isServer) classOf[Http2ServerDemux] else classOf[Http2ClientDemux]
+      override protected def logSource: Class[_] =
+        if (isServer) classOf[Http2ServerDemux] else classOf[Http2ClientDemux]
 
       // cache debug state at the beginning to avoid querying it all the time
       override lazy val isDebugEnabled: Boolean = super.isDebugEnabled
@@ -241,7 +254,8 @@ private[http2] abstract class Http2Demux(http2Settings: Http2CommonSettings, ini
       private def triggerTermination(deadline: FiniteDuration): Unit =
         // check if we are already terminating, otherwise start termination
         if (!terminating) {
-          log.debug(s"Termination of this connection was triggered. Sending GOAWAY and waiting for open requests to complete for $CompletionTimeout.")
+          log.debug(
+            s"Termination of this connection was triggered. Sending GOAWAY and waiting for open requests to complete for $CompletionTimeout.")
           terminating = true
           pushGOAWAY(ErrorCode.NO_ERROR, "Voluntary connection close.")
           lastIdBeforeTermination = lastStreamId()
@@ -270,8 +284,7 @@ private[http2] abstract class Http2Demux(http2Settings: Http2CommonSettings, ini
       // enforced immediately even before the acknowledgement is received.
       // Reminder: the receiver of a SETTINGS frame must process them in the order they are received.
       val initialLocalSettings: immutable.Seq[Setting] = immutable.Seq(
-        Setting(SettingIdentifier.SETTINGS_MAX_CONCURRENT_STREAMS, http2Settings.maxConcurrentStreams)
-      ) ++
+        Setting(SettingIdentifier.SETTINGS_MAX_CONCURRENT_STREAMS, http2Settings.maxConcurrentStreams)) ++
         immutable.Seq(Setting(SettingIdentifier.SETTINGS_ENABLE_PUSH, 0)).filter(_ => !isServer) // only on client
 
       override def preStart(): Unit = {
@@ -289,8 +302,7 @@ private[http2] abstract class Http2Demux(http2Settings: Http2CommonSettings, ini
 
         pingState.tickInterval().foreach(interval =>
           // to limit overhead rather than constantly rescheduling a timer and looking at system time we use a constant timer
-          schedulePeriodically(ConfigurablePing.Tick, interval)
-        )
+          schedulePeriodically(ConfigurablePing.Tick, interval))
       }
 
       override def pushGOAWAY(errorCode: ErrorCode, debug: String): Unit = {
@@ -308,7 +320,8 @@ private[http2] abstract class Http2Demux(http2Settings: Http2CommonSettings, ini
 
         allowReadingIncomingFrames = allow
       }
-      def pullFrameIn(): Unit = if (allowReadingIncomingFrames && !hasBeenPulled(frameIn) && !isClosed(frameIn)) pull(frameIn)
+      def pullFrameIn(): Unit =
+        if (allowReadingIncomingFrames && !hasBeenPulled(frameIn) && !isClosed(frameIn)) pull(frameIn)
 
       def tryPullSubStreams(): Unit = {
         if (!hasBeenPulled(substreamIn) && !isClosed(substreamIn)) {
@@ -319,88 +332,92 @@ private[http2] abstract class Http2Demux(http2Settings: Http2CommonSettings, ini
       }
 
       // -----------------------------------------------------------------
-      setHandler(frameIn, new InHandler {
-
-        def onPush(): Unit = {
-          val frame = grab(frameIn)
-          frame match {
-            case _: PingFrame => // handle later
-            case _            => pingState.onDataFrameSeen()
-          }
-          frame match {
-            case WindowUpdateFrame(streamId, increment) if streamId == 0 /* else fall through to StreamFrameEvent */ => multiplexer.updateConnectionLevelWindow(increment)
-            case p: PriorityFrame => multiplexer.updatePriority(p)
-            case s: StreamFrameEvent =>
-              if (!terminating)
-                handleStreamEvent(s)
-              else if (s.streamId <= lastIdBeforeTermination)
-                handleStreamEvent(s)
-              else
-                // make clear that we are not accepting any more data on other streams
-                multiplexer.pushControlFrame(RstStreamFrame(s.streamId, ErrorCode.REFUSED_STREAM))
-
-            case SettingsFrame(settings) =>
-              if (settings.nonEmpty) debug(s"Got ${settings.length} settings!")
-
-              val settingsAppliedOk = applyRemoteSettings(settings)
-              if (settingsAppliedOk) {
-                multiplexer.pushControlFrame(SettingsAckFrame(settings))
-              }
-
-            case SettingsAckFrame(_) =>
-            // Currently, we only expect an ack for the initial settings frame, sent
-            // above in preStart. Since only some settings are supported, and those
-            // settings are non-modifiable and known at construction time, these settings
-            // are enforced from the start of the connection so there's no need to invoke
-            // `enforceSettings(initialLocalSettings)`
-
-            case PingFrame(true, data) =>
-              if (data != ConfigurablePing.Ping.data) {
-                // We only ever push static data, responding with anything else is wrong
-                pushGOAWAY(ErrorCode.PROTOCOL_ERROR, "Ping ack contained unexpected data")
-              } else {
-                pingState.onPingAck()
-              }
-            case PingFrame(false, data) =>
-              multiplexer.pushControlFrame(PingFrame(ack = true, data))
-
-            case e =>
-              debug(s"Got unhandled event $e")
-            // ignore unknown frames
+      setHandler(frameIn,
+        new InHandler {
+
+          def onPush(): Unit = {
+            val frame = grab(frameIn)
+            frame match {
+              case _: PingFrame => // handle later
+              case _            => pingState.onDataFrameSeen()
+            }
+            frame match {
+              case WindowUpdateFrame(streamId, increment)
+                  if streamId == 0 /* else fall through to StreamFrameEvent */ =>
+                multiplexer.updateConnectionLevelWindow(increment)
+              case p: PriorityFrame => multiplexer.updatePriority(p)
+              case s: StreamFrameEvent =>
+                if (!terminating)
+                  handleStreamEvent(s)
+                else if (s.streamId <= lastIdBeforeTermination)
+                  handleStreamEvent(s)
+                else
+                  // make clear that we are not accepting any more data on other streams
+                  multiplexer.pushControlFrame(RstStreamFrame(s.streamId, ErrorCode.REFUSED_STREAM))
+
+              case SettingsFrame(settings) =>
+                if (settings.nonEmpty) debug(s"Got ${settings.length} settings!")
+
+                val settingsAppliedOk = applyRemoteSettings(settings)
+                if (settingsAppliedOk) {
+                  multiplexer.pushControlFrame(SettingsAckFrame(settings))
+                }
+
+              case SettingsAckFrame(_) =>
+              // Currently, we only expect an ack for the initial settings frame, sent
+              // above in preStart. Since only some settings are supported, and those
+              // settings are non-modifiable and known at construction time, these settings
+              // are enforced from the start of the connection so there's no need to invoke
+              // `enforceSettings(initialLocalSettings)`
+
+              case PingFrame(true, data) =>
+                if (data != ConfigurablePing.Ping.data) {
+                  // We only ever push static data, responding with anything else is wrong
+                  pushGOAWAY(ErrorCode.PROTOCOL_ERROR, "Ping ack contained unexpected data")
+                } else {
+                  pingState.onPingAck()
+                }
+              case PingFrame(false, data) =>
+                multiplexer.pushControlFrame(PingFrame(ack = true, data))
+
+              case e =>
+                debug(s"Got unhandled event $e")
+              // ignore unknown frames
+            }
+            pullFrameIn()
           }
-          pullFrameIn()
-        }
 
-        override def onUpstreamFailure(ex: Throwable): Unit = {
-          ex match {
-            // every IllegalHttp2StreamIdException will be a GOAWAY with PROTOCOL_ERROR
-            case e: Http2Compliance.IllegalHttp2StreamIdException =>
-              pushGOAWAY(ErrorCode.PROTOCOL_ERROR, e.getMessage)
+          override def onUpstreamFailure(ex: Throwable): Unit = {
+            ex match {
+              // every IllegalHttp2StreamIdException will be a GOAWAY with PROTOCOL_ERROR
+              case e: Http2Compliance.IllegalHttp2StreamIdException =>
+                pushGOAWAY(ErrorCode.PROTOCOL_ERROR, e.getMessage)
 
-            case e: Http2Compliance.Http2ProtocolException =>
-              pushGOAWAY(e.errorCode, e.getMessage)
+              case e: Http2Compliance.Http2ProtocolException =>
+                pushGOAWAY(e.errorCode, e.getMessage)
 
-            case e: Http2Compliance.Http2ProtocolStreamException =>
-              resetStream(e.streamId, e.errorCode)
+              case e: Http2Compliance.Http2ProtocolStreamException =>
+                resetStream(e.streamId, e.errorCode)
 
-            case e: ParsingException =>
-              e.getCause match {
-                case null  => super.onUpstreamFailure(e) // fail with the raw parsing exception
-                case cause => onUpstreamFailure(cause) // unwrap the cause, which should carry ComplianceException and recurse
-              }
+              case e: ParsingException =>
+                e.getCause match {
+                  case null  => super.onUpstreamFailure(e) // fail with the raw parsing exception
+                  case cause => onUpstreamFailure(cause) // unwrap the cause, which should carry ComplianceException and recurse
+                }
 
-            // handle every unhandled exception
-            case NonFatal(e) =>
-              super.onUpstreamFailure(e)
+              // handle every unhandled exception
+              case NonFatal(e) =>
+                super.onUpstreamFailure(e)
+            }
           }
-        }
-      })
+        })
 
       // -----------------------------------------------------------------
       // FIXME: What if user handler doesn't pull in new substreams? Should we reject them
       //        after a while or buffer only a limited amount?
       val bufferedSubStreamOutput = new BufferedOutlet[Http2SubStream](fromOutlet(substreamOut))
-      override def dispatchSubstream(initialHeaders: ParsedHeadersFrame, data: Either[ByteString, Source[Any, Any]], correlationAttributes: Map[AttributeKey[_], _]): Unit =
+      override def dispatchSubstream(initialHeaders: ParsedHeadersFrame, data: Either[ByteString, Source[Any, Any]],
+          correlationAttributes: Map[AttributeKey[_], _]): Unit =
         bufferedSubStreamOutput.push(Http2SubStream(initialHeaders, OptionVal.None, data, correlationAttributes))
 
       // -----------------------------------------------------------------
@@ -432,22 +449,23 @@ private[http2] abstract class Http2Demux(http2Settings: Http2CommonSettings, ini
       }
 
       // -----------------------------------------------------------------
-      setHandler(substreamIn, new InHandler {
-        def onPush(): Unit = {
-          val sub = grab(substreamIn)
-          handleOutgoingCreated(sub)
-          // Once the incoming stream is handled, we decide if we need to pull more.
-          tryPullSubStreams()
-        }
-
-        override def onUpstreamFinish(): Unit =
-          if (isServer) // on the server side conservatively shut everything down if user handler completes prematurely
-            super.onUpstreamFinish()
-          else { // on the client side allow ongoing responses to be delivered for a while even if requests are done
-            completeIfDone()
-            scheduleOnce(CompletionTimeout, completionTimeout)
+      setHandler(substreamIn,
+        new InHandler {
+          def onPush(): Unit = {
+            val sub = grab(substreamIn)
+            handleOutgoingCreated(sub)
+            // Once the incoming stream is handled, we decide if we need to pull more.
+            tryPullSubStreams()
           }
-      })
+
+          override def onUpstreamFinish(): Unit =
+            if (isServer) // on the server side conservatively shut everything down if user handler completes prematurely
+              super.onUpstreamFinish()
+            else { // on the client side allow ongoing responses to be delivered for a while even if requests are done
+              completeIfDone()
+              scheduleOnce(CompletionTimeout, completionTimeout)
+            }
+        })
 
       /**
        * Tune this peer to the remote Settings.
@@ -497,7 +515,8 @@ private[http2] abstract class Http2Demux(http2Settings: Http2CommonSettings, ini
             pingState.clear()
           }
         case CompletionTimeout =>
-          info("Timeout: Peer didn't finish in-flight requests. Closing pending HTTP/2 streams. Increase this timeout via the 'completion-timeout' setting.")
+          info(
+            "Timeout: Peer didn't finish in-flight requests. Closing pending HTTP/2 streams. Increase this timeout via the 'completion-timeout' setting.")
 
           shutdownStreamHandling()
           completeStage()
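The `ConfigurablePing` hunks in the `Http2Demux` diff above implement keep-alive pings that fire only after a configured stretch of idle ticks, with any data frame resetting the counter. A minimal standalone sketch of that accounting (illustrative names, not the real `PingState` API):

```scala
// Emit a PING only after N consecutive ticks with no data; traffic resets the count.
final class PingState(pingEveryNTicksWithoutData: Long) {
  private var ticksSinceData = 0L

  // any observed data frame resets the idle counter
  def onDataFrameSeen(): Unit = ticksSinceData = 0

  // called on each timer tick; returns true when a PING frame should be sent
  def onTick(): Boolean = {
    ticksSinceData += 1
    if (ticksSinceData >= pingEveryNTicksWithoutData) { ticksSinceData = 0; true }
    else false
  }
}
```

In the diff, the tick interval itself is derived once from `pingInterval` and `pingTimeout` and scheduled with a fixed periodic timer, which avoids rescheduling and system-time lookups on every frame.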
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2Multiplexer.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2Multiplexer.scala
index f65fdce40..05d5004d8 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2Multiplexer.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2Multiplexer.scala
@@ -136,7 +136,8 @@ private[http2] trait Http2MultiplexerSupport { logic: GraphStageLogic with Stage
       private val controlFrameBuffer: mutable.Queue[FrameEvent] = new mutable.Queue[FrameEvent]
       private val sendableOutstreams: mutable.Queue[Int] = new mutable.Queue[Int]
       private def enqueueStream(streamId: Int): Unit = {
-        if (isDebugEnabled) require(!sendableOutstreams.contains(streamId), s"Stream [$streamId] was enqueued multiple times.") // requires expensive scanning -> avoid in production
+        if (isDebugEnabled)
+          require(!sendableOutstreams.contains(streamId), s"Stream [$streamId] was enqueued multiple times.") // requires expensive scanning -> avoid in production
         sendableOutstreams.enqueue(streamId)
       }
       private def dequeueStream(streamId: Int): Unit =
@@ -293,7 +294,8 @@ private[http2] trait Http2MultiplexerSupport { logic: GraphStageLogic with Stage
 
       /** Pulled and data is pending but no connection-level window available */
       case object WaitingForConnectionWindow extends WithSendableOutStreams {
-        def onPull(): MultiplexerState = throw new IllegalStateException(s"pull unexpected while waiting for connection window")
+        def onPull(): MultiplexerState =
+          throw new IllegalStateException(s"pull unexpected while waiting for connection window")
         def pushControlFrame(frame: FrameEvent): MultiplexerState = {
           pushFrameOut(frame)
           WaitingForNetworkToSendData
@@ -323,7 +325,7 @@ private[http2] trait Http2MultiplexerSupport { logic: GraphStageLogic with Stage
       s"Changing state from $oldState to $newState"
     }
 
-    /** Logs DEBUG level timing data for the output side of the multiplexer*/
+    /** Logs DEBUG level timing data for the output side of the multiplexer */
     def reportTimings(): Unit = debug {
       val timingsReport = timings.toSeq.sortBy(_._1).map {
         case (name, nanos) => f"${nanos / 1000000}%5d ms $name"
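The `enqueueStream` hunk above shows a debug-gated invariant: the duplicate-enqueue check requires an O(n) scan of the queue, so it only runs when debug logging is enabled. A self-contained sketch of the pattern (names are illustrative):

```scala
import scala.collection.mutable

final class SendQueue(isDebugEnabled: Boolean) {
  private val sendableOutstreams = mutable.Queue.empty[Int]

  def enqueueStream(streamId: Int): Unit = {
    // contains() scans the whole queue -> only pay for the check in debug mode
    if (isDebugEnabled && sendableOutstreams.contains(streamId))
      throw new IllegalStateException(s"Stream [$streamId] was enqueued multiple times.")
    sendableOutstreams.enqueue(streamId)
  }

  def dequeueStream(): Int = sendableOutstreams.dequeue()
}
```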
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2Protocol.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2Protocol.scala
index b5193a7c3..933c4c2e7 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2Protocol.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2Protocol.scala
@@ -251,6 +251,7 @@ private[http] object Http2Protocol {
 
   sealed abstract class ErrorCode(val id: Int) extends Product
   object ErrorCode {
+
     /**
      * NO_ERROR (0x0):  The associated condition is not a result of an
      *    error.  For example, a GOAWAY might include this code to indicate
@@ -319,25 +320,25 @@ private[http] object Http2Protocol {
      * CONNECT_ERROR (0xa):  The connection established in response to a
      *    CONNECT request (Section 8.3) was reset or abnormally closed.
      */
-    case object CONNECT_ERROR extends ErrorCode(0xa)
+    case object CONNECT_ERROR extends ErrorCode(0xA)
 
     /**
      * ENHANCE_YOUR_CALM (0xb):  The endpoint detected that its peer is
      *    exhibiting a behavior that might be generating excessive load.
      */
-    case object ENHANCE_YOUR_CALM extends ErrorCode(0xb)
+    case object ENHANCE_YOUR_CALM extends ErrorCode(0xB)
 
     /**
      * INADEQUATE_SECURITY (0xc):  The underlying transport has properties
      *    that do not meet minimum security requirements (see Section 9.2).
      */
-    case object INADEQUATE_SECURITY extends ErrorCode(0xc)
+    case object INADEQUATE_SECURITY extends ErrorCode(0xC)
 
     /**
      * HTTP_1_1_REQUIRED (0xd):  The endpoint requires that HTTP/1.1 be used
      *    instead of HTTP/2.
      */
-    case object HTTP_1_1_REQUIRED extends ErrorCode(0xd)
+    case object HTTP_1_1_REQUIRED extends ErrorCode(0xD)
 
     case class Unknown private (override val id: Int) extends ErrorCode(id)
 
@@ -356,8 +357,7 @@ private[http] object Http2Protocol {
         CONNECT_ERROR,
         ENHANCE_YOUR_CALM,
         INADEQUATE_SECURITY,
-        HTTP_1_1_REQUIRED
-      ).toSeq
+        HTTP_1_1_REQUIRED).toSeq
 
     // make sure that lookup works and `All` ordering isn't broken
     All.foreach(f => require(f == byId(f.id), s"ErrorCode $f with id ${f.id} must be found"))
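An aside for readers skimming the Http2Protocol hunks above: the changes are purely formatting (scalafmt prefers upper-case hex literals such as `0xA`), but the surrounding sealed-ADT-with-id-lookup pattern, whose ordering invariant the `All.foreach(... require ...)` check guards, is worth seeing on its own. A minimal standalone sketch (hypothetical names, not the actual Akka HTTP code):

```scala
// Sketch of the pattern: each error code carries its wire id, `All`
// enumerates the known cases, and `byId` recovers a case from a received
// frame, falling back to an Unknown wrapper for unrecognized ids.
sealed abstract class Code(val id: Int) extends Product
object Code {
  case object NO_ERROR          extends Code(0x0)
  case object PROTOCOL_ERROR    extends Code(0x1)
  case object ENHANCE_YOUR_CALM extends Code(0xB)
  final case class Unknown(override val id: Int) extends Code

  val All: Seq[Code] = Seq(NO_ERROR, PROTOCOL_ERROR, ENHANCE_YOUR_CALM)

  def byId(id: Int): Code = All.find(_.id == id).getOrElse(Unknown(id))

  // same invariant as in the diff above: every known code must round-trip
  All.foreach(c => require(c == byId(c.id), s"Code $c with id ${c.id} must be found"))
}
```

The real implementation can use an id-indexed lookup instead of a linear `find`, which is exactly why a round-trip check over `All` is valuable: it fails fast if the ordering and the ids ever drift apart.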
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2StreamHandling.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2StreamHandling.scala
index 27991cb05..7bb5ccaa9 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2StreamHandling.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/Http2StreamHandling.scala
@@ -35,7 +35,8 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
   def multiplexer: Http2Multiplexer
   def settings: Http2CommonSettings
   def pushGOAWAY(errorCode: ErrorCode, debug: String): Unit
-  def dispatchSubstream(initialHeaders: ParsedHeadersFrame, data: Either[ByteString, Source[Any, Any]], correlationAttributes: Map[AttributeKey[_], _]): Unit
+  def dispatchSubstream(initialHeaders: ParsedHeadersFrame, data: Either[ByteString, Source[Any, Any]],
+      correlationAttributes: Map[AttributeKey[_], _]): Unit
   def isUpgraded: Boolean
 
   def wrapTrailingHeaders(headers: ParsedHeadersFrame): Option[HttpEntity.ChunkStreamPart]
@@ -56,6 +57,7 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
   private var largestIncomingStreamId = 0
   private var outstandingConnectionLevelWindow = Http2Protocol.InitialWindowSize
   private var totalBufferedData = 0
+
   /**
    * The "last peer-initiated stream that was or might be processed on the sending endpoint in this connection"
    *
@@ -65,6 +67,7 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
 
   private var maxConcurrentStreams = Http2Protocol.InitialMaxConcurrentStreams
   def setMaxConcurrentStreams(newValue: Int): Unit = maxConcurrentStreams = newValue
+
   /**
    * @return true if the number of outgoing Active streams (Active includes Open
    *         and any variant of HalfClosedXxx) doesn't exceed MaxConcurrentStreams
@@ -110,10 +113,12 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
       multiplexer.pushControlFrame(stream.initialHeaders)
 
       if (stream.initialHeaders.endStream) {
-        updateState(stream.streamId, _.handleOutgoingCreatedAndFinished(stream.correlationAttributes), "handleOutgoingCreatedAndFinished")
+        updateState(stream.streamId, _.handleOutgoingCreatedAndFinished(stream.correlationAttributes),
+          "handleOutgoingCreatedAndFinished")
       } else {
         val outStream = OutStream(stream)
-        updateState(stream.streamId, _.handleOutgoingCreated(outStream, stream.correlationAttributes), "handleOutgoingCreated")
+        updateState(stream.streamId, _.handleOutgoingCreated(outStream, stream.correlationAttributes),
+          "handleOutgoingCreated")
       }
     } else
       // stream was cancelled by peer before our response was ready
@@ -131,9 +136,9 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
   /** Called by multiplexer to distribute changes from INITIAL_WINDOW_SIZE to all streams */
   def distributeWindowDeltaToAllStreams(delta: Int): Unit =
     updateAllStates({
-      case s: Sending => s.increaseWindow(delta)
-      case x          => x
-    }, "distributeWindowDeltaToAllStreams")
+        case s: Sending => s.increaseWindow(delta)
+        case x          => x
+      }, "distributeWindowDeltaToAllStreams")
 
   /** Called by the multiplexer if ready to send a data frame */
   def pullNextFrame(streamId: Int, maxSize: Int): PullFrameResult =
@@ -146,7 +151,8 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
   private def updateAllStates(handle: StreamState => StreamState, event: String, eventArg: AnyRef = null): Unit =
     streamStates.keys.foreach(streamId => updateState(streamId.toInt, handle, event, eventArg))
 
-  private def updateState(streamId: Int, handle: StreamState => StreamState, event: String, eventArg: AnyRef = null): Unit =
+  private def updateState(
+      streamId: Int, handle: StreamState => StreamState, event: String, eventArg: AnyRef = null): Unit =
     updateStateAndReturn(streamId, x => (handle(x), ()), event, eventArg)
 
   // Calling multiplexer.enqueueOutStream directly out of the state machine is not allowed, because it might try to
@@ -160,7 +166,8 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
     multiplexer.enqueueOutStream(streamId)
 
   private var stateMachineRunning = false
-  private def updateStateAndReturn[R](streamId: Int, handle: StreamState => (StreamState, R), event: String, eventArg: AnyRef = null): R = {
+  private def updateStateAndReturn[R](streamId: Int, handle: StreamState => (StreamState, R), event: String,
+      eventArg: AnyRef = null): R = {
     require(!stateMachineRunning, "State machine already running")
     stateMachineRunning = true
 
@@ -175,7 +182,8 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
       case newState => streamStates.put(streamId, newState)
     }
 
-    debug(s"Incoming side of stream [$streamId] changed state: ${oldState.stateName} -> ${newState.stateName} after handling [$event${if (eventArg ne null) s"($eventArg)" else ""}]")
+    debug(
+      s"Incoming side of stream [$streamId] changed state: ${oldState.stateName} -> ${newState.stateName} after handling [$event${if (eventArg ne null) s"($eventArg)" else ""}]")
 
     stateMachineRunning = false
     if (deferredStreamToEnqueue != -1) {
@@ -242,34 +250,40 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
 
     /** Called when we receive a user-created stream (that is open for more data) */
     def handleOutgoingCreated(outStream: OutStream, correlationAttributes: Map[AttributeKey[_], _]): StreamState = {
-      warning(s"handleOutgoingCreated received unexpectedly in state $stateName. This indicates a bug in Akka HTTP, please report it to the issue tracker.")
+      warning(
+        s"handleOutgoingCreated received unexpectedly in state $stateName. This indicates a bug in Akka HTTP, please report it to the issue tracker.")
       this
     }
+
     /** Called when we receive a user-created stream that is already closed */
     def handleOutgoingCreatedAndFinished(correlationAttributes: Map[AttributeKey[_], _]): StreamState = {
-      warning(s"handleOutgoingCreatedAndFinished received unexpectedly in state $stateName. This indicates a bug in Akka HTTP, please report it to the issue tracker.")
+      warning(
+        s"handleOutgoingCreatedAndFinished received unexpectedly in state $stateName. This indicates a bug in Akka HTTP, please report it to the issue tracker.")
       this
     }
     def handleOutgoingEnded(): StreamState = {
-      warning(s"handleOutgoingEnded received unexpectedly in state $stateName. This indicates a bug in Akka HTTP, please report it to the issue tracker.")
+      warning(
+        s"handleOutgoingEnded received unexpectedly in state $stateName. This indicates a bug in Akka HTTP, please report it to the issue tracker.")
       this
     }
     def handleOutgoingFailed(cause: Throwable): StreamState = {
-      warning(s"handleOutgoingFailed received unexpectedly in state $stateName. This indicates a bug in Akka HTTP, please report it to the issue tracker.")
+      warning(
+        s"handleOutgoingFailed received unexpectedly in state $stateName. This indicates a bug in Akka HTTP, please report it to the issue tracker.")
       this
     }
     def receivedUnexpectedFrame(e: StreamFrameEvent): StreamState = {
       debug(s"Received unexpected frame of type ${e.frameTypeName} for stream ${e.streamId} in state $stateName")
-      pushGOAWAY(ErrorCode.PROTOCOL_ERROR, s"Received unexpected frame of type ${e.frameTypeName} for stream ${e.streamId} in state $stateName")
+      pushGOAWAY(ErrorCode.PROTOCOL_ERROR,
+        s"Received unexpected frame of type ${e.frameTypeName} for stream ${e.streamId} in state $stateName")
       shutdown()
       Closed
     }
 
     protected def expectIncomingStream(
-      event:                 StreamFrameEvent,
-      nextStateEmpty:        StreamState,
-      nextStateStream:       IncomingStreamBuffer => StreamState,
-      correlationAttributes: Map[AttributeKey[_], _]             = Map.empty): StreamState =
+        event: StreamFrameEvent,
+        nextStateEmpty: StreamState,
+        nextStateStream: IncomingStreamBuffer => StreamState,
+        correlationAttributes: Map[AttributeKey[_], _] = Map.empty): StreamState =
       event match {
         case frame @ ParsedHeadersFrame(streamId, endStream, _, _) =>
           if (endStream) {
@@ -284,11 +298,11 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
       }
 
     protected def dispatchStream(
-      streamId:              Int,
-      headers:               ParsedHeadersFrame,
-      initialData:           ByteString,
-      correlationAttributes: Map[AttributeKey[_], _],
-      nextStateStream:       IncomingStreamBuffer => StreamState): StreamState = {
+        streamId: Int,
+        headers: ParsedHeadersFrame,
+        initialData: ByteString,
+        correlationAttributes: Map[AttributeKey[_], _],
+        nextStateStream: IncomingStreamBuffer => StreamState): StreamState = {
       val subSource = new SubSourceOutlet[Any](s"substream-out-$streamId")
       val buffer = new IncomingStreamBuffer(streamId, subSource)
       if (initialData.nonEmpty) buffer.onDataFrame(DataFrame(streamId, endStream = false, initialData)) // fabricate frame
@@ -296,8 +310,10 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
       nextStateStream(buffer)
     }
 
-    def pullNextFrame(maxSize: Int): (StreamState, PullFrameResult) = throw new IllegalStateException(s"pullNextFrame not supported in state $stateName")
-    def incomingStreamPulled(): StreamState = throw new IllegalStateException(s"incomingStreamPulled not supported in state $stateName")
+    def pullNextFrame(maxSize: Int): (StreamState, PullFrameResult) =
+      throw new IllegalStateException(s"pullNextFrame not supported in state $stateName")
+    def incomingStreamPulled(): StreamState =
+      throw new IllegalStateException(s"incomingStreamPulled not supported in state $stateName")
 
     /** Called to cleanup any state when the connection is torn down */
     def shutdown(): Unit = ()
@@ -313,15 +329,19 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
       } else
         expectIncomingStream(event, HalfClosedRemoteWaitingForOutgoingStream(0), OpenReceivingDataFirst(_, 0))
 
-    override def handleOutgoingCreated(outStream: OutStream, correlationAttributes: Map[AttributeKey[_], _]): StreamState = OpenSendingData(outStream, correlationAttributes)
-    override def handleOutgoingCreatedAndFinished(correlationAttributes: Map[AttributeKey[_], _]): StreamState = HalfClosedLocalWaitingForPeerStream(correlationAttributes)
+    override def handleOutgoingCreated(
+        outStream: OutStream, correlationAttributes: Map[AttributeKey[_], _]): StreamState =
+      OpenSendingData(outStream, correlationAttributes)
+    override def handleOutgoingCreatedAndFinished(correlationAttributes: Map[AttributeKey[_], _]): StreamState =
+      HalfClosedLocalWaitingForPeerStream(correlationAttributes)
   }
+
   /** Special state that allows collecting some incoming data before dispatching it either as strict or streamed entity */
   case class CollectingIncomingData(
-    headers:               ParsedHeadersFrame,
-    correlationAttributes: Map[AttributeKey[_], _],
-    collectedData:         ByteString,
-    extraInitialWindow:    Int) extends ReceivingData {
+      headers: ParsedHeadersFrame,
+      correlationAttributes: Map[AttributeKey[_], _],
+      collectedData: ByteString,
+      extraInitialWindow: Int) extends ReceivingData {
 
     override protected def onDataFrame(dataFrame: DataFrame): StreamState = {
       val newData = collectedData ++ dataFrame.payload
@@ -331,21 +351,26 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
         dispatchSubstream(headers, Left(newData), correlationAttributes)
         HalfClosedRemoteWaitingForOutgoingStream(extraInitialWindow)
       } else if (newData.length >= settings.minCollectStrictEntitySize)
-        dispatchStream(dataFrame.streamId, headers, newData, correlationAttributes, OpenReceivingDataFirst(_, extraInitialWindow))
+        dispatchStream(dataFrame.streamId, headers, newData, correlationAttributes,
+          OpenReceivingDataFirst(_, extraInitialWindow))
       else
         copy(collectedData = newData)
     }
 
     override protected def onTrailer(parsedHeadersFrame: ParsedHeadersFrame): StreamState = this // trailing headers not supported for requests right now
-    override protected def incrementWindow(delta: Int): StreamState = copy(extraInitialWindow = extraInitialWindow + delta)
+    override protected def incrementWindow(delta: Int): StreamState =
+      copy(extraInitialWindow = extraInitialWindow + delta)
     override protected def onRstStreamFrame(rstStreamFrame: RstStreamFrame): Unit = {} // nothing to do here
   }
-  case class OpenReceivingDataFirst(buffer: IncomingStreamBuffer, extraInitialWindow: Int = 0) extends ReceivingDataWithBuffer(HalfClosedRemoteWaitingForOutgoingStream(extraInitialWindow)) {
-    override def handleOutgoingCreated(outStream: OutStream, correlationAttributes: Map[AttributeKey[_], _]): StreamState = {
+  case class OpenReceivingDataFirst(buffer: IncomingStreamBuffer, extraInitialWindow: Int = 0)
+      extends ReceivingDataWithBuffer(HalfClosedRemoteWaitingForOutgoingStream(extraInitialWindow)) {
+    override def handleOutgoingCreated(
+        outStream: OutStream, correlationAttributes: Map[AttributeKey[_], _]): StreamState = {
       outStream.increaseWindow(extraInitialWindow)
       Open(buffer, outStream)
     }
-    override def handleOutgoingCreatedAndFinished(correlationAttributes: Map[AttributeKey[_], _]): StreamState = HalfClosedLocal(buffer)
+    override def handleOutgoingCreatedAndFinished(correlationAttributes: Map[AttributeKey[_], _]): StreamState =
+      HalfClosedLocal(buffer)
     override def handleOutgoingEnded(): StreamState = Closed
 
     override def incrementWindow(delta: Int): StreamState = copy(extraInitialWindow = extraInitialWindow + delta)
@@ -371,7 +396,8 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
       (nextState, res)
     }
 
-    def handleWindowUpdate(windowUpdate: WindowUpdateFrame): StreamState = increaseWindow(windowUpdate.windowSizeIncrement)
+    def handleWindowUpdate(windowUpdate: WindowUpdateFrame): StreamState =
+      increaseWindow(windowUpdate.windowSizeIncrement)
 
     override def handleOutgoingFailed(cause: Throwable): StreamState = Closed
 
@@ -386,7 +412,8 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
     }
   }
 
-  case class OpenSendingData(outStream: OutStream, correlationAttributes: Map[AttributeKey[_], _]) extends StreamState with Sending {
+  case class OpenSendingData(outStream: OutStream, correlationAttributes: Map[AttributeKey[_], _]) extends StreamState
+      with Sending {
     override def handle(event: StreamFrameEvent): StreamState = event match {
       case _: ParsedHeadersFrame =>
         expectIncomingStream(event, HalfClosedRemoteSendingData(outStream), Open(_, outStream), correlationAttributes)
@@ -424,7 +451,8 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
         } else {
           val nextState = onDataFrame(d)
 
-          val windowSizeIncrement = flowController.onConnectionDataReceived(outstandingConnectionLevelWindow, totalBufferedData)
+          val windowSizeIncrement =
+            flowController.onConnectionDataReceived(outstandingConnectionLevelWindow, totalBufferedData)
           if (windowSizeIncrement > 0) {
             multiplexer.pushControlFrame(WindowUpdateFrame(Http2Protocol.NoStreamId, windowSizeIncrement))
             outstandingConnectionLevelWindow += windowSizeIncrement
@@ -448,7 +476,8 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
     protected def incrementWindow(delta: Int): StreamState
     protected def onRstStreamFrame(rstStreamFrame: RstStreamFrame): Unit
   }
-  sealed abstract class ReceivingDataWithBuffer(afterEndStreamReceived: StreamState) extends ReceivingData { _: Product =>
+  sealed abstract class ReceivingDataWithBuffer(afterEndStreamReceived: StreamState) extends ReceivingData {
+    _: Product =>
     protected def buffer: IncomingStreamBuffer
 
     override protected def onDataFrame(dataFrame: DataFrame): StreamState = {
@@ -460,7 +489,8 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
       afterBufferEvent
     }
 
-    override protected def onRstStreamFrame(rstStreamFrame: RstStreamFrame): Unit = buffer.onRstStreamFrame(rstStreamFrame)
+    override protected def onRstStreamFrame(rstStreamFrame: RstStreamFrame): Unit =
+      buffer.onRstStreamFrame(rstStreamFrame)
 
     override def incomingStreamPulled(): StreamState = {
       buffer.dispatchNextChunk()
@@ -478,7 +508,8 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
   }
 
   // on the incoming side there's (almost) no difference between Open and HalfClosedLocal
-  case class Open(buffer: IncomingStreamBuffer, outStream: OutStream) extends ReceivingDataWithBuffer(HalfClosedRemoteSendingData(outStream)) with Sending {
+  case class Open(buffer: IncomingStreamBuffer, outStream: OutStream)
+      extends ReceivingDataWithBuffer(HalfClosedRemoteSendingData(outStream)) with Sending {
     override def handleOutgoingEnded(): StreamState = HalfClosedLocal(buffer)
 
     override protected def onRstStreamFrame(rstStreamFrame: RstStreamFrame): Unit = {
@@ -490,6 +521,7 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
       this
     }
   }
+
   /**
    * We have closed the outgoing stream, but the incoming stream is still going.
    */
@@ -505,12 +537,14 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
       case _                    => receivedUnexpectedFrame(event)
     }
 
-    override def handleOutgoingCreated(outStream: OutStream, correlationAttributes: Map[AttributeKey[_], _]): StreamState = {
+    override def handleOutgoingCreated(
+        outStream: OutStream, correlationAttributes: Map[AttributeKey[_], _]): StreamState = {
       outStream.increaseWindow(extraInitialWindow)
       HalfClosedRemoteSendingData(outStream)
     }
     override def handleOutgoingCreatedAndFinished(correlationAttributes: Map[AttributeKey[_], _]): StreamState = Closed
   }
+
   /**
    * They have closed the incoming stream, but the outgoing stream is still going.
    */
@@ -580,7 +614,8 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
           multiplexer.closeStream(streamId)
         } else {
           buffer ++= data.payload
-          debug(s"Received DATA ${data.sizeInWindow} for stream [$streamId], remaining window space now $outstandingStreamWindow, buffered: ${buffer.length}")
+          debug(
+            s"Received DATA ${data.sizeInWindow} for stream [$streamId], remaining window space now $outstandingStreamWindow, buffered: ${buffer.length}")
           dispatchNextChunk()
         }
       }
@@ -607,7 +642,8 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
 
         totalBufferedData -= dataSize
 
-        debug(s"Dispatched chunk of $dataSize for stream [$streamId], remaining window space now $outstandingStreamWindow, buffered: ${buffer.length}")
+        debug(
+          s"Dispatched chunk of $dataSize for stream [$streamId], remaining window space now $outstandingStreamWindow, buffered: ${buffer.length}")
         updateWindows()
       }
       if (buffer.isEmpty && wasClosed) {
@@ -641,8 +677,8 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
 
       debug(
         s"adjusting con-level window by $connectionLevel, stream-level window by $streamLevel, " +
-          s"remaining window space now $outstandingStreamWindow, buffered: ${buffer.length}, " +
-          s"remaining connection window space now $outstandingConnectionLevelWindow, total buffered: $totalBufferedData")
+        s"remaining window space now $outstandingStreamWindow, buffered: ${buffer.length}, " +
+        s"remaining connection window space now $outstandingConnectionLevelWindow, total buffered: $totalBufferedData")
     }
 
     def shutdown(): Unit =
@@ -672,11 +708,10 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
     }
   }
   final class OutStreamImpl(
-    val streamId:           Int,
-    private var maybeInlet: OptionVal[SubSinkInlet[_]],
-    var outboundWindowLeft: Int,
-    var trailer:            OptionVal[ParsedHeadersFrame]
-  ) extends InHandler with OutStream {
+      val streamId: Int,
+      private var maybeInlet: OptionVal[SubSinkInlet[_]],
+      var outboundWindowLeft: Int,
+      var trailer: OptionVal[ParsedHeadersFrame]) extends InHandler with OutStream {
     private def inlet: SubSinkInlet[_] = maybeInlet.get
 
     private var buffer: ByteString = ByteString.empty
@@ -722,7 +757,8 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
       } else
         maybePull()
 
-      debug(s"[$streamId] sending ${toSend.length} bytes, endStream = $endStream, remaining buffer [${buffer.length}], remaining stream-level WINDOW [$outboundWindowLeft]")
+      debug(
+        s"[$streamId] sending ${toSend.length} bytes, endStream = $endStream, remaining buffer [${buffer.length}], remaining stream-level WINDOW [$outboundWindowLeft]")
 
       // Multiplexer will enqueue for us if we report more data being available
       // We cannot call `multiplexer.enqueueOutStream` from here because this is called from the multiplexer.
@@ -745,7 +781,8 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
       // TODO: Check that buffer is not too much over the limit (which we might warn the user about)
       //       The problem here is that backpressure will only work properly if batch elements like
       //       ByteString have a reasonable size.
-      if (!upstreamClosed && buffer.length < multiplexer.maxBytesToBufferPerSubstream && !inlet.hasBeenPulled && !inlet.isClosed) inlet.pull()
+      if (!upstreamClosed && buffer.length < multiplexer.maxBytesToBufferPerSubstream && !inlet.hasBeenPulled && !inlet.isClosed)
+        inlet.pull()
 
     /** Cleans up internal state (but not external) */
     private def cleanupStream(): Unit = {
@@ -777,8 +814,11 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
         case HttpEntity.Chunk(newData, _) => buffer ++= newData
         case HttpEntity.LastChunk(_, headers) =>
           if (headers.nonEmpty && !trailer.isEmpty)
-            log.warning("Found both an attribute with trailing headers, and headers in the `LastChunk`. This is not supported.")
-          trailer = OptionVal.Some(ParsedHeadersFrame(streamId, endStream = true, HttpMessageRendering.renderHeaders(headers, log, isServer, shouldRenderAutoHeaders = false, dateHeaderRendering = DateHeaderRendering.Unavailable), None))
+            log.warning(
+              "Found both an attribute with trailing headers, and headers in the `LastChunk`. This is not supported.")
+          trailer = OptionVal.Some(ParsedHeadersFrame(streamId, endStream = true,
+            HttpMessageRendering.renderHeaders(headers, log, isServer, shouldRenderAutoHeaders = false,
+              dateHeaderRendering = DateHeaderRendering.Unavailable), None))
       }
 
       maybePull()
@@ -805,9 +845,11 @@ private[http2] trait Http2StreamHandling { self: GraphStageLogic with LogHelper
     }
   }
   // needed once PUSH_PROMISE support was added
-  //case object ReservedLocal extends IncomingStreamState
-  //case object ReservedRemote extends IncomingStreamState
+  // case object ReservedLocal extends IncomingStreamState
+  // case object ReservedRemote extends IncomingStreamState
 }
 private[http2] object Http2StreamHandling {
-  val ConnectionWasAbortedException = new IllegalStateException("The HTTP/2 connection was shut down while the request was still ongoing") with NoStackTrace
+  val ConnectionWasAbortedException =
+    new IllegalStateException("The HTTP/2 connection was shut down while the request was still ongoing")
+      with NoStackTrace
 }
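Many of the Http2StreamHandling hunks above re-wrap calls to `updateState`/`updateStateAndReturn`, whose `stateMachineRunning` flag plus `deferredStreamToEnqueue` sentinel prevent re-entrant state transitions. A simplified, self-contained sketch of that guard pattern (hypothetical names and state representation, not the actual Akka HTTP implementation):

```scala
// Re-entrancy guard around a per-key state machine: a transition must never
// start while another is in flight, so work requested mid-transition is
// parked in a sentinel slot and replayed once the transition completes.
object StateMachine {
  private val states = scala.collection.mutable.Map.empty[Int, String]
  private var running = false
  private var deferredKey = -1 // sentinel: nothing deferred

  def transition(key: Int, next: String => String): String = {
    require(!running, "State machine already running")
    running = true
    val newState = next(states.getOrElse(key, "Idle"))
    states(key) = newState
    running = false
    if (deferredKey != -1) {
      val k = deferredKey
      deferredKey = -1
      enqueue(k) // safe now: no transition in flight
    }
    newState
  }

  // May be called from inside a `next` handler; must not nest a transition.
  def enqueue(key: Int): Unit =
    if (running) deferredKey = key
    else () // here the real code would hand the stream to the multiplexer
}
```

The same shape appears in the diff: `enqueueOutStream` defers when called from within the state machine, because enqueuing can synchronously call back into `pullNextFrame` and would otherwise trip the `require`.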
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/HttpMessageRendering.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/HttpMessageRendering.scala
index caf7afee2..f5731aad7 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/HttpMessageRendering.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/HttpMessageRendering.scala
@@ -19,14 +19,17 @@ import scala.collection.immutable.VectorBuilder
 
 /** INTERNAL API */
 @InternalApi
-private[http2] class ResponseRendering(settings: ServerSettings, val log: LoggingAdapter, val dateHeaderRendering: DateHeaderRendering) extends MessageRendering[HttpResponse] {
+private[http2] class ResponseRendering(settings: ServerSettings, val log: LoggingAdapter,
+    val dateHeaderRendering: DateHeaderRendering) extends MessageRendering[HttpResponse] {
 
   private def failBecauseOfMissingAttribute: Nothing =
     // attribute is missing, shutting down because we will most likely otherwise miss a response and leak a substream
     // TODO: optionally a less drastic measure would be only resetting all the active substreams
-    throw new RuntimeException("Received response for HTTP/2 request without x-http2-stream-id attribute. Failing connection.")
+    throw new RuntimeException(
+      "Received response for HTTP/2 request without x-http2-stream-id attribute. Failing connection.")
 
-  protected override def nextStreamId(response: HttpResponse): Int = response.attribute(Http2.streamId).getOrElse(failBecauseOfMissingAttribute)
+  protected override def nextStreamId(response: HttpResponse): Int =
+    response.attribute(Http2.streamId).getOrElse(failBecauseOfMissingAttribute)
 
   protected override def initialHeaderPairs(response: HttpResponse): VectorBuilder[(String, String)] = {
     val headerPairs = new VectorBuilder[(String, String)]()
@@ -42,7 +45,8 @@ private[http2] class ResponseRendering(settings: ServerSettings, val log: Loggin
 
 /** INTERNAL API */
 @InternalApi
-private[http2] class RequestRendering(settings: ClientConnectionSettings, val log: LoggingAdapter) extends MessageRendering[HttpRequest] {
+private[http2] class RequestRendering(
+    settings: ClientConnectionSettings, val log: LoggingAdapter) extends MessageRendering[HttpRequest] {
 
   private val streamId = new AtomicInteger(1)
   protected override def nextStreamId(r: HttpRequest): Int = streamId.getAndAdd(2)
@@ -56,7 +60,8 @@ private[http2] class RequestRendering(settings: ClientConnectionSettings, val lo
     headerPairs
   }
 
-  override lazy val peerIdHeader: Option[(String, String)] = settings.userAgentHeader.map(h => h.lowercaseName -> h.value)
+  override lazy val peerIdHeader: Option[(String, String)] =
+    settings.userAgentHeader.map(h => h.lowercaseName -> h.value)
 
   override protected def dateHeaderRendering: DateHeaderRendering = DateHeaderRendering.Unavailable
 }
@@ -75,23 +80,27 @@ private[http2] sealed abstract class MessageRendering[R <: HttpMessage] extends
     val headerPairs = initialHeaderPairs(r)
 
     HttpMessageRendering.addContentHeaders(headerPairs, r.entity)
-    HttpMessageRendering.renderHeaders(r.headers, headerPairs, peerIdHeader, log, isServer = r.isResponse, shouldRenderAutoHeaders = true, dateHeaderRendering)
+    HttpMessageRendering.renderHeaders(r.headers, headerPairs, peerIdHeader, log, isServer = r.isResponse,
+      shouldRenderAutoHeaders = true, dateHeaderRendering)
 
     val streamId = nextStreamId(r)
     val headersFrame = ParsedHeadersFrame(streamId, endStream = r.entity.isKnownEmpty, headerPairs.result(), None)
     val trailingHeadersFrame =
       r.attribute(AttributeKeys.trailer) match {
-        case Some(trailer) if trailer.headers.nonEmpty => OptionVal.Some(ParsedHeadersFrame(streamId, endStream = true, trailer.headers, None))
-        case None                                      => OptionVal.None
+        case Some(trailer) if trailer.headers.nonEmpty =>
+          OptionVal.Some(ParsedHeadersFrame(streamId, endStream = true, trailer.headers, None))
+        case None => OptionVal.None
       }
 
-    Http2SubStream(r.entity, headersFrame, trailingHeadersFrame, r.attributes.filter(_._2.isInstanceOf[RequestResponseAssociation]))
+    Http2SubStream(r.entity, headersFrame, trailingHeadersFrame,
+      r.attributes.filter(_._2.isInstanceOf[RequestResponseAssociation]))
   }
 }
 
 /** INTERNAL API */
 @InternalApi
 private[http2] object HttpMessageRendering {
+
   /**
    * Mutates `headerPairs` adding headers related to content (type and length).
    */
@@ -102,12 +111,11 @@ private[http2] object HttpMessageRendering {
   }
 
   def renderHeaders(
-    headers:                 immutable.Seq[HttpHeader],
-    log:                     LoggingAdapter,
-    isServer:                Boolean,
-    shouldRenderAutoHeaders: Boolean,
-    dateHeaderRendering:     DateHeaderRendering
-  ): Seq[(String, String)] = {
+      headers: immutable.Seq[HttpHeader],
+      log: LoggingAdapter,
+      isServer: Boolean,
+      shouldRenderAutoHeaders: Boolean,
+      dateHeaderRendering: DateHeaderRendering): Seq[(String, String)] = {
     val headerPairs = new VectorBuilder[(String, String)]()
     renderHeaders(headers, headerPairs, None, log, isServer, shouldRenderAutoHeaders, dateHeaderRendering)
     headerPairs.result()
@@ -119,14 +127,13 @@ private[http2] object HttpMessageRendering {
    *                     peer. For example, a User-Agent on the client or a Server header on the server.
    */
   def renderHeaders(
-    headersSeq:              immutable.Seq[HttpHeader],
-    headerPairs:             VectorBuilder[(String, String)],
-    peerIdHeader:            Option[(String, String)],
-    log:                     LoggingAdapter,
-    isServer:                Boolean,
-    shouldRenderAutoHeaders: Boolean,
-    dateHeaderRendering:     DateHeaderRendering
-  ): Unit = {
+      headersSeq: immutable.Seq[HttpHeader],
+      headerPairs: VectorBuilder[(String, String)],
+      peerIdHeader: Option[(String, String)],
+      log: LoggingAdapter,
+      isServer: Boolean,
+      shouldRenderAutoHeaders: Boolean,
+      dateHeaderRendering: DateHeaderRendering): Unit = {
     def suppressionWarning(h: HttpHeader, msg: String): Unit =
       log.warning("Explicitly set HTTP header '{}' is ignored, {}", h, msg)
 
@@ -154,15 +161,18 @@ private[http2] object HttpMessageRendering {
           case x: CustomHeader =>
             addHeader(x)
 
-          case x: RawHeader if (x is "content-type") || (x is "content-length") || (x is "transfer-encoding") ||
-            (x is "date") || (x is "server") || (x is "user-agent") || (x is "connection") =>
+          case x: RawHeader
+              if (x.is("content-type")) || (x.is("content-length")) || (x.is("transfer-encoding")) ||
+              (x.is("date")) || (x.is("server")) || (x.is("user-agent")) || (x.is("connection")) =>
             suppressionWarning(x, "illegal RawHeader")
 
           case x: `Content-Length` =>
-            suppressionWarning(x, "explicit `Content-Length` header is not allowed. Use the appropriate HttpEntity subtype.")
+            suppressionWarning(x,
+              "explicit `Content-Length` header is not allowed. Use the appropriate HttpEntity subtype.")
 
           case x: `Content-Type` =>
-            suppressionWarning(x, "explicit `Content-Type` header is not allowed. Set `HttpResponse.entity.contentType` instead.")
+            suppressionWarning(x,
+              "explicit `Content-Type` header is not allowed. Set `HttpResponse.entity.contentType` instead.")
 
           case x: `Transfer-Encoding` =>
             suppressionWarning(x, "`Transfer-Encoding` header is not allowed for HTTP/2")
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/IncomingFlowController.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/IncomingFlowController.scala
index 16af9bd7c..1c46d0211 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/IncomingFlowController.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/IncomingFlowController.scala
@@ -13,7 +13,7 @@ private[http2] trait IncomingFlowController {
   def onConnectionDataReceived(outstandingConnectionLevelWindow: Int, totalBufferedData: Int): Int
 
   def onStreamDataDispatched(outstandingConnectionLevelWindow: Int, totalBufferedData: Int,
-                             outstandingStreamLevelWindow: Int, streamBufferedData: Int): IncomingFlowController.WindowIncrements
+      outstandingStreamLevelWindow: Int, streamBufferedData: Int): IncomingFlowController.WindowIncrements
 }
 
 /** INTERNAL API */
@@ -36,11 +36,11 @@ private[http2] object IncomingFlowController {
       def onConnectionDataReceived(outstandingConnectionLevelWindow: Int, totalBufferedData: Int): Int =
         ifMoreThanHalfUsed(maximumConnectionLevelWindow, outstandingConnectionLevelWindow, totalBufferedData)
 
-      def onStreamDataDispatched(outstandingConnectionLevelWindow: Int, totalBufferedData: Int, outstandingStreamLevelWindow: Int, streamBufferedData: Int): WindowIncrements =
+      def onStreamDataDispatched(outstandingConnectionLevelWindow: Int, totalBufferedData: Int,
+          outstandingStreamLevelWindow: Int, streamBufferedData: Int): WindowIncrements =
         WindowIncrements(
           onConnectionDataReceived(outstandingConnectionLevelWindow, totalBufferedData),
-          ifMoreThanHalfUsed(maximumStreamLevelWindow, outstandingStreamLevelWindow, streamBufferedData)
-        )
+          ifMoreThanHalfUsed(maximumStreamLevelWindow, outstandingStreamLevelWindow, streamBufferedData))
 
       private def ifMoreThanHalfUsed(max: Int, outstanding: Int, buffered: Int): Int = {
         val totalReservedSpace = outstanding + buffered
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/OutgoingConnectionBuilderImpl.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/OutgoingConnectionBuilderImpl.scala
index 772547d3a..a72d6b034 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/OutgoingConnectionBuilderImpl.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/OutgoingConnectionBuilderImpl.scala
@@ -40,42 +40,47 @@ private[akka] object OutgoingConnectionBuilderImpl {
       connectionContext = None,
       log = system.classicSystem.log,
       system = system,
-      usingHttp2 = false
-    )
+      usingHttp2 = false)
 
   private final case class Impl(
-    host:                     String,
-    port:                     Option[Int],
-    clientConnectionSettings: ClientConnectionSettings,
-    connectionContext:        Option[HttpsConnectionContext],
-    log:                      LoggingAdapter,
-    system:                   ClassicActorSystemProvider,
-    usingHttp2:               Boolean) extends OutgoingConnectionBuilder {
+      host: String,
+      port: Option[Int],
+      clientConnectionSettings: ClientConnectionSettings,
+      connectionContext: Option[HttpsConnectionContext],
+      log: LoggingAdapter,
+      system: ClassicActorSystemProvider,
+      usingHttp2: Boolean) extends OutgoingConnectionBuilder {
 
     override def toHost(host: String): OutgoingConnectionBuilder = copy(host = host)
 
     override def toPort(port: Int): OutgoingConnectionBuilder = copy(port = Some(port))
 
-    override def withCustomHttpsConnectionContext(httpsConnectionContext: HttpsConnectionContext): OutgoingConnectionBuilder = copy(connectionContext = Some(httpsConnectionContext))
+    override def withCustomHttpsConnectionContext(
+        httpsConnectionContext: HttpsConnectionContext): OutgoingConnectionBuilder =
+      copy(connectionContext = Some(httpsConnectionContext))
 
-    override def withClientConnectionSettings(settings: ClientConnectionSettings): OutgoingConnectionBuilder = copy(clientConnectionSettings = settings)
+    override def withClientConnectionSettings(settings: ClientConnectionSettings): OutgoingConnectionBuilder =
+      copy(clientConnectionSettings = settings)
 
     override def logTo(logger: LoggingAdapter): OutgoingConnectionBuilder = copy(log = logger)
 
     override def http(): Flow[HttpRequest, HttpResponse, Future[OutgoingConnection]] = {
       // http/1.1 plaintext
-      Http(system).outgoingConnectionUsingContext(host, port.getOrElse(80), ConnectionContext.noEncryption(), clientConnectionSettings, log)
+      Http(system).outgoingConnectionUsingContext(host, port.getOrElse(80), ConnectionContext.noEncryption(),
+        clientConnectionSettings, log)
     }
 
     override def https(): Flow[HttpRequest, HttpResponse, Future[OutgoingConnection]] = {
       // http/1.1 tls
-      Http(system).outgoingConnectionHttps(host, port.getOrElse(443), connectionContext.getOrElse(Http(system).defaultClientHttpsContext), None, clientConnectionSettings, log)
+      Http(system).outgoingConnectionHttps(host, port.getOrElse(443),
+        connectionContext.getOrElse(Http(system).defaultClientHttpsContext), None, clientConnectionSettings, log)
     }
 
     override def http2(): Flow[HttpRequest, HttpResponse, Future[OutgoingConnection]] = {
       // http/2 tls
       val port = this.port.getOrElse(443)
-      Http2(system).outgoingConnection(host, port, connectionContext.getOrElse(Http(system).defaultClientHttpsContext), clientConnectionSettings, log)
+      Http2(system).outgoingConnection(host, port, connectionContext.getOrElse(Http(system).defaultClientHttpsContext),
+        clientConnectionSettings, log)
     }
 
     override def managedPersistentHttp2(): Flow[HttpRequest, HttpResponse, NotUsed] =
@@ -98,43 +103,56 @@ private[akka] object OutgoingConnectionBuilderImpl {
 
   private class JavaAdapter(actual: Impl) extends JOutgoingConnectionBuilder {
 
-    override def toHost(host: String): JOutgoingConnectionBuilder = new JavaAdapter(actual.toHost(host).asInstanceOf[Impl])
+    override def toHost(host: String): JOutgoingConnectionBuilder =
+      new JavaAdapter(actual.toHost(host).asInstanceOf[Impl])
 
     override def toPort(port: Int): JOutgoingConnectionBuilder = new JavaAdapter(actual.toPort(port).asInstanceOf[Impl])
 
-    override def http(): JFlow[javadsl.model.HttpRequest, javadsl.model.HttpResponse, CompletionStage[javadsl.OutgoingConnection]] =
+    override def http()
+        : JFlow[javadsl.model.HttpRequest, javadsl.model.HttpResponse, CompletionStage[javadsl.OutgoingConnection]] =
       javaFlow(actual.http())
 
-    override def https(): JFlow[javadsl.model.HttpRequest, javadsl.model.HttpResponse, CompletionStage[javadsl.OutgoingConnection]] =
+    override def https()
+        : JFlow[javadsl.model.HttpRequest, javadsl.model.HttpResponse, CompletionStage[javadsl.OutgoingConnection]] =
       javaFlow(actual.https())
 
     override def managedPersistentHttp2(): JFlow[javadsl.model.HttpRequest, javadsl.model.HttpResponse, NotUsed] =
       javaFlowKeepMatVal(actual.managedPersistentHttp2())
 
-    override def http2WithPriorKnowledge(): JFlow[javadsl.model.HttpRequest, javadsl.model.HttpResponse, CompletionStage[javadsl.OutgoingConnection]] =
+    override def http2WithPriorKnowledge()
+        : JFlow[javadsl.model.HttpRequest, javadsl.model.HttpResponse, CompletionStage[javadsl.OutgoingConnection]] =
       javaFlow(actual.http2WithPriorKnowledge())
 
-    override def managedPersistentHttp2WithPriorKnowledge(): JFlow[javadsl.model.HttpRequest, javadsl.model.HttpResponse, NotUsed] =
+    override def managedPersistentHttp2WithPriorKnowledge()
+        : JFlow[javadsl.model.HttpRequest, javadsl.model.HttpResponse, NotUsed] =
       javaFlowKeepMatVal(actual.managedPersistentHttp2WithPriorKnowledge())
 
-    override def http2(): JFlow[javadsl.model.HttpRequest, javadsl.model.HttpResponse, CompletionStage[javadsl.OutgoingConnection]] =
+    override def http2()
+        : JFlow[javadsl.model.HttpRequest, javadsl.model.HttpResponse, CompletionStage[javadsl.OutgoingConnection]] =
       javaFlow(actual.http2())
 
-    override def withCustomHttpsConnectionContext(httpsConnectionContext: javadsl.HttpsConnectionContext): JOutgoingConnectionBuilder =
-      new JavaAdapter(actual.withCustomHttpsConnectionContext(httpsConnectionContext.asInstanceOf[HttpsConnectionContext]).asInstanceOf[Impl])
+    override def withCustomHttpsConnectionContext(
+        httpsConnectionContext: javadsl.HttpsConnectionContext): JOutgoingConnectionBuilder =
+      new JavaAdapter(actual.withCustomHttpsConnectionContext(
+        httpsConnectionContext.asInstanceOf[HttpsConnectionContext]).asInstanceOf[Impl])
 
-    override def withClientConnectionSettings(settings: akka.http.javadsl.settings.ClientConnectionSettings): JOutgoingConnectionBuilder =
-      new JavaAdapter(actual.withClientConnectionSettings(settings.asInstanceOf[ClientConnectionSettings]).asInstanceOf[Impl])
+    override def withClientConnectionSettings(
+        settings: akka.http.javadsl.settings.ClientConnectionSettings): JOutgoingConnectionBuilder =
+      new JavaAdapter(
+        actual.withClientConnectionSettings(settings.asInstanceOf[ClientConnectionSettings]).asInstanceOf[Impl])
 
     override def logTo(logger: LoggingAdapter): JOutgoingConnectionBuilder =
       new JavaAdapter(actual.logTo(logger).asInstanceOf[Impl])
 
-    private def javaFlow(flow: Flow[HttpRequest, HttpResponse, Future[OutgoingConnection]]): JFlow[javadsl.model.HttpRequest, javadsl.model.HttpResponse, CompletionStage[javadsl.OutgoingConnection]] = {
+    private def javaFlow(flow: Flow[HttpRequest, HttpResponse, Future[OutgoingConnection]])
+        : JFlow[javadsl.model.HttpRequest, javadsl.model.HttpResponse, CompletionStage[javadsl.OutgoingConnection]] = {
       import scala.compat.java8.FutureConverters.toJava
-      javaFlowKeepMatVal(flow.mapMaterializedValue(f => toJava(f.map(oc => new javadsl.OutgoingConnection(oc))(ExecutionContexts.parasitic))))
+      javaFlowKeepMatVal(flow.mapMaterializedValue(f =>
+        toJava(f.map(oc => new javadsl.OutgoingConnection(oc))(ExecutionContexts.parasitic))))
     }
 
-    private def javaFlowKeepMatVal[M](flow: Flow[HttpRequest, HttpResponse, M]): JFlow[javadsl.model.HttpRequest, javadsl.model.HttpResponse, M] =
+    private def javaFlowKeepMatVal[M](
+        flow: Flow[HttpRequest, HttpResponse, M]): JFlow[javadsl.model.HttpRequest, javadsl.model.HttpResponse, M] =
       flow.asInstanceOf[Flow[javadsl.model.HttpRequest, javadsl.model.HttpResponse, M]].asJava
   }
 }
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/PriorityTree.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/PriorityTree.scala
index 0fa07e1f5..cdce78657 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/PriorityTree.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/PriorityTree.scala
@@ -23,6 +23,7 @@ private[http2] trait PriorityNode {
 /** INTERNAL API */
 @InternalApi
 private[http2] trait PriorityTree {
+
   /**
    * Returns a new priority tree containing the new or existing and updated stream.
    */
@@ -73,7 +74,8 @@ private[http2] object PriorityTree {
         insertNode(PriorityInfo(streamDependency, 0, DefaultWeight, TreeSet.empty)) // try again after creating intermediate
           .insert(streamId, streamDependency, weight, exclusive)
     }
-    private def update(streamId: Int, newStreamDependency: Int, newWeight: Int, newlyExclusive: Boolean): PriorityTree = {
+    private def update(
+        streamId: Int, newStreamDependency: Int, newWeight: Int, newlyExclusive: Boolean): PriorityTree = {
       require(nodes.isDefinedAt(streamId), s"Not must exist to be updated: $streamId")
       require(streamId != newStreamDependency, s"Stream cannot depend on itself: $streamId")
 
@@ -103,11 +105,9 @@ private[http2] object PriorityTree {
 
       create(
         (nodes - streamId) +
-          (info.streamDependency -> dependencyInfo.copy(childrenIds = dependencyInfo.childrenIds - streamId)) ++
-          info.childrenIds.unsorted.map(id =>
-            id -> nodes(id).copy(streamDependency = info.streamDependency)
-          )
-      )
+        (info.streamDependency -> dependencyInfo.copy(childrenIds = dependencyInfo.childrenIds - streamId)) ++
+        info.childrenIds.unsorted.map(id =>
+          id -> nodes(id).copy(streamDependency = info.streamDependency)))
     }
 
     private def dependsTransitivelyOn(child: Int, parent: Int): Boolean = {
@@ -128,7 +128,8 @@ private[http2] object PriorityTree {
       updateNodes { nodes =>
         nodes.updated(streamId, updater(nodes(streamId)))
       }
-    private def updateChildren(updater: immutable.TreeSet[Int] => immutable.TreeSet[Int]): PriorityInfo => PriorityInfo = { old =>
+    private def updateChildren(
+        updater: immutable.TreeSet[Int] => immutable.TreeSet[Int]): PriorityInfo => PriorityInfo = { old =>
       old.copy(childrenIds = updater(old.childrenIds))
     }
     private def insertNode(newNode: PriorityInfo): PriorityTreeImpl =
@@ -149,9 +150,8 @@ private[http2] object PriorityTree {
   }
 
   private case class PriorityInfo(
-    streamId:         Int,
-    streamDependency: Int,
-    weight:           Int,
-    childrenIds:      immutable.TreeSet[Int]
-  )
+      streamId: Int,
+      streamDependency: Int,
+      weight: Int,
+      childrenIds: immutable.TreeSet[Int])
 }
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/ProtocolSwitch.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/ProtocolSwitch.scala
index b9b639eec..b85af1b39 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/ProtocolSwitch.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/ProtocolSwitch.scala
@@ -20,9 +20,9 @@ import scala.concurrent.{ Future, Promise }
 @InternalApi
 private[http] object ProtocolSwitch {
   def apply(
-    chosenProtocolAccessor: SessionBytes => String,
-    http1Stack:             HttpImplementation,
-    http2Stack:             HttpImplementation): Flow[SslTlsInbound, SslTlsOutbound, Future[ServerTerminator]] =
+      chosenProtocolAccessor: SessionBytes => String,
+      http1Stack: HttpImplementation,
+      http2Stack: HttpImplementation): Flow[SslTlsInbound, SslTlsOutbound, Future[ServerTerminator]] =
     Flow.fromGraph(
       new GraphStageWithMaterializedValue[FlowShape[SslTlsInbound, SslTlsOutbound], Future[ServerTerminator]] {
 
@@ -34,7 +34,8 @@ private[http] object ProtocolSwitch {
         val shape: FlowShape[SslTlsInbound, SslTlsOutbound] =
           FlowShape(netIn, netOut)
 
-        override def createLogicAndMaterializedValue(inheritedAttributes: Attributes): (GraphStageLogic, Future[ServerTerminator]) = {
+        override def createLogicAndMaterializedValue(
+            inheritedAttributes: Attributes): (GraphStageLogic, Future[ServerTerminator]) = {
           val terminatorPromise = Promise[ServerTerminator]()
 
           object Logic extends GraphStageLogic(shape) {
@@ -47,18 +48,20 @@ private[http] object ProtocolSwitch {
 
             override def preStart(): Unit = pull(netIn)
 
-            setHandler(netIn, new InHandler {
-              def onPush(): Unit =
-                grab(netIn) match {
-                  case first @ SessionBytes(session, bytes) =>
-                    val chosen = chosenProtocolAccessor(first)
-                    chosen match {
-                      case "h2" => install(http2Stack.addAttributes(HttpAttributes.tlsSessionInfo(session)), first)
-                      case _    => install(http1Stack, first)
-                    }
-                  case SessionTruncated => failStage(new SSLException("TLS session was truncated (probably missing a close_notify packet)."))
-                }
-            })
+            setHandler(netIn,
+              new InHandler {
+                def onPush(): Unit =
+                  grab(netIn) match {
+                    case first @ SessionBytes(session, bytes) =>
+                      val chosen = chosenProtocolAccessor(first)
+                      chosen match {
+                        case "h2" => install(http2Stack.addAttributes(HttpAttributes.tlsSessionInfo(session)), first)
+                        case _    => install(http1Stack, first)
+                      }
+                    case SessionTruncated =>
+                      failStage(new SSLException("TLS session was truncated (probably missing a close_notify packet)."))
+                  }
+              })
 
             private val ignorePull = new OutHandler {
               def onPull(): Unit = ()
@@ -77,8 +80,7 @@ private[http] object ProtocolSwitch {
                 Attributes(
                   // don't (re)set dispatcher attribute to avoid adding an explicit async boundary
                   // between low-level and high-level stages
-                  inheritedAttributes.attributeList.filterNot(_.isInstanceOf[Dispatcher])
-                )
+                  inheritedAttributes.attributeList.filterNot(_.isInstanceOf[Dispatcher]))
 
               val serverTerminator =
                 serverImplementation
@@ -113,19 +115,20 @@ private[http] object ProtocolSwitch {
                 }
 
               out.setHandler(firstHandler)
-              setHandler(in, new InHandler {
-                override def onPush(): Unit = out.push(grab(in))
+              setHandler(in,
+                new InHandler {
+                  override def onPush(): Unit = out.push(grab(in))
 
-                override def onUpstreamFinish(): Unit = {
-                  out.complete()
-                  super.onUpstreamFinish()
-                }
+                  override def onUpstreamFinish(): Unit = {
+                    out.complete()
+                    super.onUpstreamFinish()
+                  }
 
-                override def onUpstreamFailure(ex: Throwable): Unit = {
-                  out.fail(ex)
-                  super.onUpstreamFailure(ex)
-                }
-              })
+                  override def onUpstreamFailure(ex: Throwable): Unit = {
+                    out.fail(ex)
+                    super.onUpstreamFailure(ex)
+                  }
+                })
 
               if (out.isAvailable) pull(in) // to account for lost pulls during initialization
             }
@@ -152,10 +155,10 @@ private[http] object ProtocolSwitch {
 
           (Logic, terminatorPromise.future)
         }
-      }
-    )
+      })
 
-  def byPreface(http1Stack: HttpImplementation, http2Stack: HttpImplementation): Flow[SslTlsInbound, SslTlsOutbound, Future[ServerTerminator]] = {
+  def byPreface(http1Stack: HttpImplementation, http2Stack: HttpImplementation)
+      : Flow[SslTlsInbound, SslTlsOutbound, Future[ServerTerminator]] = {
     def chooseProtocol(sessionBytes: SessionBytes): String =
       if (sessionBytes.bytes.startsWith(Http2Protocol.ClientConnectionPreface)) "h2" else "http/1.1"
     ProtocolSwitch(chooseProtocol, http1Stack, http2Stack)
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/RequestParsing.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/RequestParsing.scala
index 26669727b..792c296a9 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/RequestParsing.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/RequestParsing.scala
@@ -28,11 +28,13 @@ import scala.collection.immutable.VectorBuilder
 private[http2] object RequestParsing {
 
   @nowarn("msg=use remote-address-attribute instead")
-  def parseRequest(httpHeaderParser: HttpHeaderParser, serverSettings: ServerSettings, streamAttributes: Attributes): Http2SubStream => HttpRequest = {
+  def parseRequest(httpHeaderParser: HttpHeaderParser, serverSettings: ServerSettings, streamAttributes: Attributes)
+      : Http2SubStream => HttpRequest = {
 
     val remoteAddressHeader: Option[`Remote-Address`] =
       if (serverSettings.remoteAddressHeader) {
-        streamAttributes.get[HttpAttributes.RemoteAddress].map(remote => model.headers.`Remote-Address`(RemoteAddress(remote.address)))
+        streamAttributes.get[HttpAttributes.RemoteAddress].map(remote =>
+          model.headers.`Remote-Address`(RemoteAddress(remote.address)))
         // in order to avoid searching all the time for the attribute, we need to guard it with the setting condition
       } else None // no need to emit the remote address header
 
@@ -68,15 +70,14 @@ private[http2] object RequestParsing {
 
     { subStream =>
       def createRequest(
-        method:          HttpMethod,
-        scheme:          String,
-        authority:       Uri.Authority,
-        pathAndRawQuery: (Uri.Path, Option[String]),
-        contentType:     OptionVal[ContentType],
-        contentLength:   Long,
-        cookies:         StringBuilder,
-        headers:         VectorBuilder[HttpHeader]
-      ): HttpRequest = {
+          method: HttpMethod,
+          scheme: String,
+          authority: Uri.Authority,
+          pathAndRawQuery: (Uri.Path, Option[String]),
+          contentType: OptionVal[ContentType],
+          contentLength: Long,
+          cookies: StringBuilder,
+          headers: VectorBuilder[HttpHeader]): HttpRequest = {
         // https://httpwg.org/specs/rfc7540.html#rfc.section.8.1.2.3: these pseudo header fields are mandatory for a request
         checkRequiredPseudoHeader(":scheme", scheme)
         checkRequiredPseudoHeader(":method", method)
@@ -102,19 +103,19 @@ private[http2] object RequestParsing {
 
       @tailrec
       def rec(
-        incomingHeaders:   IndexedSeq[(String, AnyRef)],
-        offset:            Int,
-        method:            HttpMethod                   = null,
-        scheme:            String                       = null,
-        authority:         Uri.Authority                = null,
-        pathAndRawQuery:   (Uri.Path, Option[String])   = null,
-        contentType:       OptionVal[ContentType]       = OptionVal.None,
-        contentLength:     Long                         = -1,
-        cookies:           StringBuilder                = null,
-        seenRegularHeader: Boolean                      = false,
-        headers:           VectorBuilder[HttpHeader]    = new VectorBuilder[HttpHeader]
-      ): HttpRequest =
-        if (offset == incomingHeaders.size) createRequest(method, scheme, authority, pathAndRawQuery, contentType, contentLength, cookies, headers)
+          incomingHeaders: IndexedSeq[(String, AnyRef)],
+          offset: Int,
+          method: HttpMethod = null,
+          scheme: String = null,
+          authority: Uri.Authority = null,
+          pathAndRawQuery: (Uri.Path, Option[String]) = null,
+          contentType: OptionVal[ContentType] = OptionVal.None,
+          contentLength: Long = -1,
+          cookies: StringBuilder = null,
+          seenRegularHeader: Boolean = false,
+          headers: VectorBuilder[HttpHeader] = new VectorBuilder[HttpHeader]): HttpRequest =
+        if (offset == incomingHeaders.size)
+          createRequest(method, scheme, authority, pathAndRawQuery, contentType, contentLength, cookies, headers)
         else {
           import hpack.Http2HeaderParsing._
           val (name, value) = incomingHeaders(offset)
@@ -122,28 +123,33 @@ private[http2] object RequestParsing {
             case ":scheme" =>
               checkUniquePseudoHeader(":scheme", scheme)
               checkNoRegularHeadersBeforePseudoHeader(":scheme", seenRegularHeader)
-              rec(incomingHeaders, offset + 1, method, Scheme.get(value), authority, pathAndRawQuery, contentType, contentLength, cookies, seenRegularHeader, headers)
+              rec(incomingHeaders, offset + 1, method, Scheme.get(value), authority, pathAndRawQuery, contentType,
+                contentLength, cookies, seenRegularHeader, headers)
 
             case ":method" =>
               checkUniquePseudoHeader(":method", method)
               checkNoRegularHeadersBeforePseudoHeader(":method", seenRegularHeader)
 
-              rec(incomingHeaders, offset + 1, Method.get(value), scheme, authority, pathAndRawQuery, contentType, contentLength, cookies, seenRegularHeader, headers)
+              rec(incomingHeaders, offset + 1, Method.get(value), scheme, authority, pathAndRawQuery, contentType,
+                contentLength, cookies, seenRegularHeader, headers)
 
             case ":path" =>
               checkUniquePseudoHeader(":path", pathAndRawQuery)
               checkNoRegularHeadersBeforePseudoHeader(":path", seenRegularHeader)
-              rec(incomingHeaders, offset + 1, method, scheme, authority, PathAndQuery.get(value), contentType, contentLength, cookies, seenRegularHeader, headers)
+              rec(incomingHeaders, offset + 1, method, scheme, authority, PathAndQuery.get(value), contentType,
+                contentLength, cookies, seenRegularHeader, headers)
 
             case ":authority" =>
               checkUniquePseudoHeader(":authority", authority)
               checkNoRegularHeadersBeforePseudoHeader(":authority", seenRegularHeader)
 
-              rec(incomingHeaders, offset + 1, method, scheme, Authority.get(value), pathAndRawQuery, contentType, contentLength, cookies, seenRegularHeader, headers)
+              rec(incomingHeaders, offset + 1, method, scheme, Authority.get(value), pathAndRawQuery, contentType,
+                contentLength, cookies, seenRegularHeader, headers)
 
             case "content-type" =>
               if (contentType.isEmpty)
-                rec(incomingHeaders, offset + 1, method, scheme, authority, pathAndRawQuery, OptionVal.Some(ContentType.get(value)), contentLength, cookies, true, headers)
+                rec(incomingHeaders, offset + 1, method, scheme, authority, pathAndRawQuery,
+                  OptionVal.Some(ContentType.get(value)), contentLength, cookies, true, headers)
               else
                 malformedRequest("HTTP message must not contain more than one content-type header")
 
@@ -153,8 +159,10 @@ private[http2] object RequestParsing {
             case "content-length" =>
               if (contentLength == -1) {
                 val contentLengthValue = ContentLength.get(value).toLong
-                if (contentLengthValue < 0) malformedRequest("HTTP message must not contain a negative content-length header")
-                rec(incomingHeaders, offset + 1, method, scheme, authority, pathAndRawQuery, contentType, contentLengthValue, cookies, true, headers)
+                if (contentLengthValue < 0)
+                  malformedRequest("HTTP message must not contain a negative content-length header")
+                rec(incomingHeaders, offset + 1, method, scheme, authority, pathAndRawQuery, contentType,
+                  contentLengthValue, cookies, true, headers)
               } else malformedRequest("HTTP message must not contain more than one content-length header")
 
             case "cookie" =>
@@ -165,16 +173,19 @@ private[http2] object RequestParsing {
                 cookies.append("; ") // Append octets as required by the spec
               }
               cookiesBuilder.append(Cookie.get(value))
-              rec(incomingHeaders, offset + 1, method, scheme, authority, pathAndRawQuery, contentType, contentLength, cookiesBuilder, true, headers)
+              rec(incomingHeaders, offset + 1, method, scheme, authority, pathAndRawQuery, contentType, contentLength,
+                cookiesBuilder, true, headers)
 
             case _ =>
-              rec(incomingHeaders, offset + 1, method, scheme, authority, pathAndRawQuery, contentType, contentLength, cookies, true, headers += OtherHeader.get(value))
+              rec(incomingHeaders, offset + 1, method, scheme, authority, pathAndRawQuery, contentType, contentLength,
+                cookies, true, headers += OtherHeader.get(value))
           }
         }
 
       val incomingHeaders = subStream.initialHeaders.keyValuePairs.toIndexedSeq
       if (incomingHeaders.size > serverSettings.parserSettings.maxHeaderCount)
-        malformedRequest(s"HTTP message contains more than the configured limit of ${serverSettings.parserSettings.maxHeaderCount} headers")
+        malformedRequest(
+          s"HTTP message contains more than the configured limit of ${serverSettings.parserSettings.maxHeaderCount} headers")
       else rec(incomingHeaders, 0)
     }
   }
@@ -206,7 +217,8 @@ private[http2] object RequestParsing {
       malformedRequest("Header 'Transfer-Encoding' must not be used with HTTP/2")
     case "te" =>
       // https://tools.ietf.org/html/rfc7540#section-8.1.2.2
-      if (httpHeader.value.compareToIgnoreCase("trailers") != 0) malformedRequest(s"Header 'TE' must not contain value other than 'trailers', value was '${httpHeader.value}")
+      if (httpHeader.value.compareToIgnoreCase("trailers") != 0)
+        malformedRequest(s"Header 'TE' must not contain value other than 'trailers', value was '${httpHeader.value}")
     case _ => // ok
   }
 
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/StreamPrioritizer.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/StreamPrioritizer.scala
index b26685a77..52d6a6ac1 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/StreamPrioritizer.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/StreamPrioritizer.scala
@@ -17,6 +17,7 @@ import FrameEvent.PriorityFrame
  */
 @InternalApi
 private[http2] trait StreamPrioritizer {
+
   /** Update priority information for a substream */
   def updatePriority(priorityFrame: PriorityFrame): Unit
 
@@ -27,6 +28,7 @@ private[http2] trait StreamPrioritizer {
 /** INTERNAL API */
 @InternalApi
 private[http2] object StreamPrioritizer {
+
   /** A prioritizer that ignores priority information and just sends to the first stream */
   object First extends StreamPrioritizer {
     def updatePriority(priorityFrame: PriorityFrame): Unit = ()
@@ -38,12 +40,14 @@ private[http2] object StreamPrioritizer {
       private var priorityTree = PriorityTree()
 
       def updatePriority(info: PriorityFrame): Unit = {
-        priorityTree = priorityTree.insertOrUpdate(info.streamId, info.streamDependency, info.weight, info.exclusiveFlag)
-        //debug(s"Priority tree after update $info:\n${priorityTree.print}")
+        priorityTree =
+          priorityTree.insertOrUpdate(info.streamId, info.streamDependency, info.weight, info.exclusiveFlag)
+        // debug(s"Priority tree after update $info:\n${priorityTree.print}")
       }
 
       /** Choose a substream from a set of substream ids that have data available */
       def chooseSubstream(streams: Set[Int]): Int = {
+
         /**
          * Chooses one of the children, returns the chosen stream id (which must be part of `streams` or
          * -1 if no eligible stream was found in that part of the tree).
@@ -64,7 +68,8 @@ private[http2] object StreamPrioritizer {
         }
         val result = chooseFromChildren(priorityTree.rootNode)
         if (result == -1)
-          throw new RuntimeException(s"Couldn't find one of the streams [${streams.toSeq.sorted.mkString(", ")}] in priority tree\n${priorityTree.print}")
+          throw new RuntimeException(
+            s"Couldn't find one of the streams [${streams.toSeq.sorted.mkString(", ")}] in priority tree\n${priorityTree.print}")
 
         result
       }
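`chooseSubstream` above walks the priority tree depth-first and returns `-1` when no eligible stream exists in a subtree, throwing only if the whole tree yields `-1`. A much-simplified, self-contained sketch of that traversal — the `Node` type here is hypothetical and far simpler than the real `PriorityTree`:

```scala
// Hypothetical, simplified model of the depth-first stream selection:
// return the first node (in child order) whose id is in the eligible set,
// descending into children otherwise; -1 means nothing eligible in this subtree.
final case class Node(streamId: Int, children: List[Node])

object ChooseStream {
  def chooseFromChildren(node: Node, eligible: Set[Int]): Int =
    if (eligible.contains(node.streamId)) node.streamId
    else node.children.iterator
      .map(chooseFromChildren(_, eligible))
      .find(_ != -1)
      .getOrElse(-1)
}
```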
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/TelemetrySpi.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/TelemetrySpi.scala
index 725ec1727..0b9b33d1d 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/TelemetrySpi.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/TelemetrySpi.scala
@@ -35,7 +35,9 @@ private[http] object TelemetrySpi {
           .get
       } catch {
         case ex: Throwable =>
-          system.log.debug("{} references a class that could not be instantiated ({}) falling back to no-op implementation", fqcn, ex.toString)
+          system.log.debug(
+            "{} references a class that could not be instantiated ({}) falling back to no-op implementation", fqcn,
+            ex.toString)
           NoOpTelemetry
       }
     }
@@ -57,6 +59,7 @@ object TelemetryAttributes {
  */
 @InternalStableApi
 trait TelemetrySpi {
+
   /**
    * Flow to intercept server connections. When run the flow will have the ClientMeta attribute set.
    */
@@ -79,7 +82,8 @@ trait TelemetrySpi {
 @InternalApi
 private[http] object NoOpTelemetry extends TelemetrySpi {
   override def client: BidiFlow[HttpRequest, HttpRequest, HttpResponse, HttpResponse, NotUsed] = BidiFlow.identity
-  override def serverBinding: Flow[Tcp.IncomingConnection, Tcp.IncomingConnection, NotUsed] = Flow[Tcp.IncomingConnection]
-  override def serverConnection: BidiFlow[HttpResponse, HttpResponse, HttpRequest, HttpRequest, NotUsed] = BidiFlow.identity
+  override def serverBinding: Flow[Tcp.IncomingConnection, Tcp.IncomingConnection, NotUsed] =
+    Flow[Tcp.IncomingConnection]
+  override def serverConnection: BidiFlow[HttpResponse, HttpResponse, HttpRequest, HttpRequest, NotUsed] =
+    BidiFlow.identity
 }
-
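The `TelemetrySpi` hunk above shows the "load an implementation by fully-qualified class name, fall back to `NoOpTelemetry` on any failure" pattern. A standalone sketch of that degradation strategy — the `Telemetry` trait and `TelemetryLoader` object are hypothetical stand-ins for the real SPI:

```scala
// Hypothetical sketch of the "load by class name, fall back to no-op" pattern
// used by TelemetrySpi: any instantiation failure degrades to a safe default.
trait Telemetry { def name: String }
object NoOpTelemetry extends Telemetry { val name = "no-op" }

object TelemetryLoader {
  def load(fqcn: String): Telemetry =
    try Class.forName(fqcn).getDeclaredConstructor().newInstance().asInstanceOf[Telemetry]
    catch { case _: Throwable => NoOpTelemetry }
}
```

The design choice is that telemetry is strictly optional: a misconfigured or missing implementation must never break request processing, so the failure is logged (in the real code) and swallowed.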
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/client/PersistentConnection.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/client/PersistentConnection.scala
index 9f2b8cc2f..1118e6daa 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/client/PersistentConnection.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/client/PersistentConnection.scala
@@ -46,11 +46,13 @@ private[http2] object PersistentConnection {
    *  * generate error responses with 502 status code
    *  * custom attribute contains internal error information
    */
-  def managedConnection(connectionFlow: Flow[HttpRequest, HttpResponse, Future[OutgoingConnection]], settings: Http2ClientSettings): Flow[HttpRequest, HttpResponse, NotUsed] =
-    Flow.fromGraph(new Stage(connectionFlow, settings.maxPersistentAttempts match {
-      case 0 => None
-      case n => Some(n)
-    }, settings.baseConnectionBackoff, settings.maxConnectionBackoff))
+  def managedConnection(connectionFlow: Flow[HttpRequest, HttpResponse, Future[OutgoingConnection]],
+      settings: Http2ClientSettings): Flow[HttpRequest, HttpResponse, NotUsed] =
+    Flow.fromGraph(new Stage(connectionFlow,
+      settings.maxPersistentAttempts match {
+        case 0 => None
+        case n => Some(n)
+      }, settings.baseConnectionBackoff, settings.maxConnectionBackoff))
 
   private class AssociationTag extends RequestResponseAssociation
   private val associationTagKey = AttributeKey[AssociationTag]("PersistentConnection.associationTagKey")
@@ -59,187 +61,194 @@ private[http2] object PersistentConnection {
       StatusCodes.BadGateway,
       entity = "The server closed the connection before delivering a response.")
 
-  private class Stage(connectionFlow: Flow[HttpRequest, HttpResponse, Future[OutgoingConnection]], maxAttempts: Option[Int], baseEmbargo: FiniteDuration, _maxBackoff: FiniteDuration) extends GraphStage[FlowShape[HttpRequest, HttpResponse]] {
+  private class Stage(connectionFlow: Flow[HttpRequest, HttpResponse, Future[OutgoingConnection]],
+      maxAttempts: Option[Int], baseEmbargo: FiniteDuration, _maxBackoff: FiniteDuration)
+      extends GraphStage[FlowShape[HttpRequest, HttpResponse]] {
     val requestIn = Inlet[HttpRequest]("PersistentConnection.requestIn")
     val responseOut = Outlet[HttpResponse]("PersistentConnection.responseOut")
     val maxBaseEmbargo = _maxBackoff / 2 // because we'll add a random component of the same size to the base
 
     val shape: FlowShape[HttpRequest, HttpResponse] = FlowShape(requestIn, responseOut)
-    override def createLogic(inheritedAttributes: Attributes): GraphStageLogic = new TimerGraphStageLogic(shape) with StageLogging {
-      become(Unconnected)
+    override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
+      new TimerGraphStageLogic(shape) with StageLogging {
+        become(Unconnected)
 
-      def become(state: State): Unit = setHandlers(requestIn, responseOut, state)
+        def become(state: State): Unit = setHandlers(requestIn, responseOut, state)
 
-      trait State extends InHandler with OutHandler
-      object Unconnected extends State {
-        override def onPush(): Unit = connect(maxAttempts, Duration.Zero)
-        override def onPull(): Unit =
-          if (!isAvailable(requestIn) && !hasBeenPulled(requestIn)) // requestIn might already have been pulled when we failed and went back to Unconnected
-            pull(requestIn)
-      }
-
-      def connect(connectsLeft: Option[Int], lastEmbargo: FiniteDuration): Unit = {
-        val requestOut = new SubSourceOutlet[HttpRequest]("PersistentConnection.requestOut")
-        val responseIn = new SubSinkInlet[HttpResponse]("PersistentConnection.responseIn")
-        val connection = Promise[OutgoingConnection]()
+        trait State extends InHandler with OutHandler
+        object Unconnected extends State {
+          override def onPush(): Unit = connect(maxAttempts, Duration.Zero)
+          override def onPull(): Unit =
+            if (!isAvailable(requestIn) && !hasBeenPulled(requestIn)) // requestIn might already have been pulled when we failed and went back to Unconnected
+              pull(requestIn)
+        }
 
-        become(new Connecting(connection.future, requestOut, responseIn, connectsLeft.map(_ - 1), lastEmbargo))
+        def connect(connectsLeft: Option[Int], lastEmbargo: FiniteDuration): Unit = {
+          val requestOut = new SubSourceOutlet[HttpRequest]("PersistentConnection.requestOut")
+          val responseIn = new SubSinkInlet[HttpResponse]("PersistentConnection.responseIn")
+          val connection = Promise[OutgoingConnection]()
 
-        connection.completeWith(Source.fromGraph(requestOut.source)
-          .viaMat(connectionFlow)(Keep.right)
-          .toMat(responseIn.sink)(Keep.left)
-          .run()(subFusingMaterializer))
-      }
+          become(new Connecting(connection.future, requestOut, responseIn, connectsLeft.map(_ - 1), lastEmbargo))
 
-      class Connecting(
-        connected:    Future[OutgoingConnection],
-        requestOut:   SubSourceOutlet[HttpRequest],
-        responseIn:   SubSinkInlet[HttpResponse],
-        connectsLeft: Option[Int],
-        lastEmbargo:  FiniteDuration
-      ) extends State {
-        connected.onComplete({
-          case Success(_) =>
-            onConnected.invoke(())
-          case Failure(cause) =>
-            onFailed.invoke(cause)
-        })(ExecutionContexts.parasitic)
-
-        var requestOutPulled = false
-        requestOut.setHandler(new OutHandler {
-          override def onPull(): Unit =
-            requestOutPulled = true
-          override def onDownstreamFinish(): Unit = ()
-        })
-        responseIn.setHandler(new InHandler {
-          override def onPush(): Unit = throw new IllegalStateException("no response push expected while connecting")
-          override def onUpstreamFinish(): Unit = ()
-          override def onUpstreamFailure(ex: Throwable): Unit = ()
-        })
-
-        override def onPush(): Unit = () // Pull might have happened before the connection failed. Element is kept in slot.
-
-        override def onPull(): Unit = {
-          if (!isAvailable(requestIn) && !hasBeenPulled(requestIn)) // requestIn might already have been pulled when we failed and went back to Unconnected
-            pull(requestIn)
+          connection.completeWith(Source.fromGraph(requestOut.source)
+            .viaMat(connectionFlow)(Keep.right)
+            .toMat(responseIn.sink)(Keep.left)
+            .run()(subFusingMaterializer))
         }
 
-        val onConnected = getAsyncCallback[Unit] { _ =>
-          val newState = new Connected(requestOut, responseIn)
-          become(newState)
-          if (requestOutPulled) {
-            if (isAvailable(requestIn)) newState.dispatchRequest(grab(requestIn))
-            else if (!hasBeenPulled(requestIn)) pull(requestIn)
+        class Connecting(
+            connected: Future[OutgoingConnection],
+            requestOut: SubSourceOutlet[HttpRequest],
+            responseIn: SubSinkInlet[HttpResponse],
+            connectsLeft: Option[Int],
+            lastEmbargo: FiniteDuration) extends State {
+          connected.onComplete {
+            case Success(_) =>
+              onConnected.invoke(())
+            case Failure(cause) =>
+              onFailed.invoke(cause)
+          }(ExecutionContexts.parasitic)
+
+          var requestOutPulled = false
+          requestOut.setHandler(new OutHandler {
+            override def onPull(): Unit =
+              requestOutPulled = true
+            override def onDownstreamFinish(): Unit = ()
+          })
+          responseIn.setHandler(new InHandler {
+            override def onPush(): Unit = throw new IllegalStateException("no response push expected while connecting")
+            override def onUpstreamFinish(): Unit = ()
+            override def onUpstreamFailure(ex: Throwable): Unit = ()
+          })
+
+          override def onPush(): Unit = () // Pull might have happened before the connection failed. Element is kept in slot.
+
+          override def onPull(): Unit = {
+            if (!isAvailable(requestIn) && !hasBeenPulled(requestIn)) // requestIn might already have been pulled when we failed and went back to Unconnected
+              pull(requestIn)
           }
-        }
-        val onFailed = getAsyncCallback[Throwable] { cause =>
-          // If the materialized value is failed, then the stream should be broken by design.
-          // Nevertheless also kick our ends of the stream.
-          responseIn.cancel()
-          requestOut.fail(new StreamTcpException("connection broken"))
-
-          if (connectsLeft.contains(0)) {
-            failStage(new RuntimeException(s"Connection failed after $maxAttempts attempts", cause))
-          } else {
-            setHandler(requestIn, Unconnected)
-            if (baseEmbargo == Duration.Zero) {
-              log.info(s"Connection attempt failed: ${cause.getMessage}. Trying to connect again${connectsLeft.map(n => s" ($n attempts left)").getOrElse("")}.")
-              connect(connectsLeft, Duration.Zero)
+
+          val onConnected = getAsyncCallback[Unit] { _ =>
+            val newState = new Connected(requestOut, responseIn)
+            become(newState)
+            if (requestOutPulled) {
+              if (isAvailable(requestIn)) newState.dispatchRequest(grab(requestIn))
+              else if (!hasBeenPulled(requestIn)) pull(requestIn)
+            }
+          }
+          val onFailed = getAsyncCallback[Throwable] { cause =>
+            // If the materialized value is failed, then the stream should be broken by design.
+            // Nevertheless also kick our ends of the stream.
+            responseIn.cancel()
+            requestOut.fail(new StreamTcpException("connection broken"))
+
+            if (connectsLeft.contains(0)) {
+              failStage(new RuntimeException(s"Connection failed after $maxAttempts attempts", cause))
             } else {
-              val embargo = lastEmbargo match {
-                case Duration.Zero => baseEmbargo
-                case otherValue    => (otherValue * 2).min(maxBaseEmbargo)
+              setHandler(requestIn, Unconnected)
+              if (baseEmbargo == Duration.Zero) {
+                log.info(s"Connection attempt failed: ${cause.getMessage}. Trying to connect again${connectsLeft.map(
+                    n => s" ($n attempts left)").getOrElse("")}.")
+                connect(connectsLeft, Duration.Zero)
+              } else {
+                val embargo = lastEmbargo match {
+                  case Duration.Zero => baseEmbargo
+                  case otherValue    => (otherValue * 2).min(maxBaseEmbargo)
+                }
+                val minMillis = embargo.toMillis
+                val maxMillis = minMillis * 2
+                val backoff = ThreadLocalRandom.current().nextLong(minMillis, maxMillis).millis
+                log.info(
+                  s"Connection attempt failed: ${cause.getMessage}. Trying to connect again after backoff ${PrettyDuration.format(
+                      backoff)} ${connectsLeft.map(n => s" ($n attempts left)").getOrElse("")}.")
+                scheduleOnce(EmbargoEnded(connectsLeft, embargo), backoff)
               }
-              val minMillis = embargo.toMillis
-              val maxMillis = minMillis * 2
-              val backoff = ThreadLocalRandom.current().nextLong(minMillis, maxMillis).millis
-              log.info(s"Connection attempt failed: ${cause.getMessage}. Trying to connect again after backoff ${PrettyDuration.format(backoff)} ${connectsLeft.map(n => s" ($n attempts left)").getOrElse("")}.")
-              scheduleOnce(EmbargoEnded(connectsLeft, embargo), backoff)
             }
           }
         }
-      }
 
-      override def onTimer(timerKey: Any): Unit = {
-        timerKey match {
-          case EmbargoEnded(connectsLeft, nextEmbargo) =>
-            log.debug("Reconnecting after backoff")
-            connect(connectsLeft, nextEmbargo)
+        override def onTimer(timerKey: Any): Unit = {
+          timerKey match {
+            case EmbargoEnded(connectsLeft, nextEmbargo) =>
+              log.debug("Reconnecting after backoff")
+              connect(connectsLeft, nextEmbargo)
+          }
         }
-      }
 
-      class Connected(
-        requestOut: SubSourceOutlet[HttpRequest],
-        responseIn: SubSinkInlet[HttpResponse]
-      ) extends State {
-        private var ongoingRequests: Map[AssociationTag, Map[AttributeKey[_], RequestResponseAssociation]] = Map.empty
-        responseIn.pull()
+        class Connected(
+            requestOut: SubSourceOutlet[HttpRequest],
+            responseIn: SubSinkInlet[HttpResponse]) extends State {
+          private var ongoingRequests: Map[AssociationTag, Map[AttributeKey[_], RequestResponseAssociation]] = Map.empty
+          responseIn.pull()
+
+          requestOut.setHandler(new OutHandler {
+            override def onPull(): Unit =
+              if (!isAvailable(requestIn)) pull(requestIn)
+              else dispatchRequest(grab(requestIn))
+
+            override def onDownstreamFinish(): Unit = onDisconnected()
+          })
+          responseIn.setHandler(new InHandler {
+            override def onPush(): Unit = {
+              val response = responseIn.grab()
+              val tag = response.attribute(associationTagKey).get
+              require(ongoingRequests.contains(tag))
+              ongoingRequests -= tag
+              push(responseOut, response.removeAttribute(associationTagKey))
+            }
 
-        requestOut.setHandler(new OutHandler {
-          override def onPull(): Unit =
-            if (!isAvailable(requestIn)) pull(requestIn)
-            else dispatchRequest(grab(requestIn))
-
-          override def onDownstreamFinish(): Unit = onDisconnected()
-        })
-        responseIn.setHandler(new InHandler {
-          override def onPush(): Unit = {
-            val response = responseIn.grab()
-            val tag = response.attribute(associationTagKey).get
-            require(ongoingRequests.contains(tag))
-            ongoingRequests -= tag
-            push(responseOut, response.removeAttribute(associationTagKey))
+            override def onUpstreamFinish(): Unit = onDisconnected()
+            override def onUpstreamFailure(ex: Throwable): Unit = onDisconnected() // FIXME: log error
+          })
+          def onDisconnected(): Unit = {
+            emitMultiple[HttpResponse](responseOut,
+              ongoingRequests.values.map(errorResponse.withAttributes(_)).toVector,
+              () => setHandler(responseOut, Unconnected))
+            responseIn.cancel()
+            requestOut.fail(new RuntimeException("connection broken"))
+
+            if (isClosed(requestIn)) {
+              // user closed PersistentConnection before and we were waiting for remaining responses
+              completeStage()
+            } else {
+              // become(Unconnected) doesn't work because of using emit
+              // so we need to do it more carefully here
+              setHandler(requestIn, Unconnected)
+              if (isAvailable(responseOut) && !hasBeenPulled(requestIn)) pull(requestIn)
+            }
           }
 
-          override def onUpstreamFinish(): Unit = onDisconnected()
-          override def onUpstreamFailure(ex: Throwable): Unit = onDisconnected() // FIXME: log error
-        })
-        def onDisconnected(): Unit = {
-          emitMultiple[HttpResponse](responseOut, ongoingRequests.values.map(errorResponse.withAttributes(_)).toVector, () => setHandler(responseOut, Unconnected))
-          responseIn.cancel()
-          requestOut.fail(new RuntimeException("connection broken"))
-
-          if (isClosed(requestIn)) {
-            // user closed PersistentConnection before and we were waiting for remaining responses
-            completeStage()
-          } else {
-            // become(Unconnected) doesn't work because of using emit
-            // so we need to do it more carefully here
-            setHandler(requestIn, Unconnected)
-            if (isAvailable(responseOut) && !hasBeenPulled(requestIn)) pull(requestIn)
+          def dispatchRequest(req: HttpRequest): Unit = {
+            val tag = new AssociationTag
+            // Some cross-compilation woes here:
+            // Explicit type ascription is needed to make both 2.12 and 2.13 compile.
+            ongoingRequests = ongoingRequests.updated(tag,
+              req.attributes.collect({
+                case (key, value: RequestResponseAssociation) => key -> value
+              }: PartialFunction[(AttributeKey[_], Any), (AttributeKey[_], RequestResponseAssociation)]))
+            requestOut.push(req.addAttribute(associationTagKey, tag))
           }
-        }
 
-        def dispatchRequest(req: HttpRequest): Unit = {
-          val tag = new AssociationTag
-          // Some cross-compilation woes here:
-          // Explicit type ascription is needed to make both 2.12 and 2.13 compile.
-          ongoingRequests = ongoingRequests.updated(tag, req.attributes.collect({
-            case (key, value: RequestResponseAssociation) => key -> value
-          }: PartialFunction[(AttributeKey[_], Any), (AttributeKey[_], RequestResponseAssociation)]))
-          requestOut.push(req.addAttribute(associationTagKey, tag))
-        }
-
-        override def onPush(): Unit = dispatchRequest(grab(requestIn))
-        override def onPull(): Unit = responseIn.pull()
+          override def onPush(): Unit = dispatchRequest(grab(requestIn))
+          override def onPull(): Unit = responseIn.pull()
 
-        // onUpstreamFinish expects "reasonable behavior" from downstream stages, i.e. that
-        // the downstream stage will eventually close all remaining inputs/outputs. Note
-        // that the PersistentConnection is often used in combination with HTTP/2 connections
-        // which to timeout if the stage completion stalls.
-        override def onUpstreamFinish(): Unit = requestOut.complete()
+          // onUpstreamFinish expects "reasonable behavior" from downstream stages, i.e. that
+          // the downstream stage will eventually close all remaining inputs/outputs. Note
+          // that the PersistentConnection is often used in combination with HTTP/2 connections
+          // which tend to time out if the stage completion stalls.
+          override def onUpstreamFinish(): Unit = requestOut.complete()
 
-        override def onUpstreamFailure(ex: Throwable): Unit = {
-          requestOut.fail(ex)
-          responseIn.cancel()
-          failStage(ex)
-        }
-        override def onDownstreamFinish(): Unit = {
-          requestOut.complete()
-          responseIn.cancel()
-          super.onDownstreamFinish()
+          override def onUpstreamFailure(ex: Throwable): Unit = {
+            requestOut.fail(ex)
+            responseIn.cancel()
+            failStage(ex)
+          }
+          override def onDownstreamFinish(): Unit = {
+            requestOut.complete()
+            responseIn.cancel()
+            super.onDownstreamFinish()
+          }
         }
       }
-    }
   }
 }
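The reconnect logic in `PersistentConnection` doubles the embargo on every failed attempt, caps it at half the configured maximum (`maxBaseEmbargo = _maxBackoff / 2`), and then schedules a delay drawn uniformly from `[embargo, 2 * embargo)`. A standalone sketch of that schedule — the `Backoff` object and method names are illustrative:

```scala
import java.util.concurrent.ThreadLocalRandom
import scala.concurrent.duration._

object Backoff {
  // Next base embargo: start at baseEmbargo, then double, capped at maxBackoff / 2
  // (half, because a random component of up to the same size is added on top).
  def nextEmbargo(last: FiniteDuration, base: FiniteDuration, maxBackoff: FiniteDuration): FiniteDuration =
    if (last == Duration.Zero) base
    else (last * 2).min(maxBackoff / 2)

  // Actual scheduled delay: uniform in [embargo, 2 * embargo).
  def withJitter(embargo: FiniteDuration): FiniteDuration = {
    val min = embargo.toMillis
    if (min == 0) Duration.Zero
    else ThreadLocalRandom.current().nextLong(min, min * 2).millis
  }
}
```

The jitter keeps many clients that lost the same server from reconnecting in lock-step, while the cap bounds the worst-case delay at `maxBackoff`.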
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/client/ResponseParsing.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/client/ResponseParsing.scala
index f9b8cda31..7768975d2 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/client/ResponseParsing.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/client/ResponseParsing.scala
@@ -22,8 +22,8 @@ import scala.collection.immutable.VectorBuilder
 
 @InternalApi
 private[http2] object ResponseParsing {
-  def parseResponse(httpHeaderParser: HttpHeaderParser, settings: ParserSettings, attributes: Attributes): Http2SubStream => HttpResponse = { subStream =>
-
+  def parseResponse(httpHeaderParser: HttpHeaderParser, settings: ParserSettings, attributes: Attributes)
+      : Http2SubStream => HttpResponse = { subStream =>
     val tlsSessionInfoHeader: Option[`Tls-Session-Info`] =
       if (settings.includeTlsSessionInfoHeader) {
         attributes.get[HttpAttributes.TLSSessionInfo].map(sslSessionInfo =>
@@ -38,13 +38,12 @@ private[http2] object ResponseParsing {
 
     @tailrec
     def rec(
-      remainingHeaders:  Seq[(String, AnyRef)],
-      status:            StatusCode                = null,
-      contentType:       OptionVal[ContentType]    = OptionVal.None,
-      contentLength:     Long                      = -1,
-      seenRegularHeader: Boolean                   = false,
-      headers:           VectorBuilder[HttpHeader] = new VectorBuilder[HttpHeader]
-    ): HttpResponse =
+        remainingHeaders: Seq[(String, AnyRef)],
+        status: StatusCode = null,
+        contentType: OptionVal[ContentType] = OptionVal.None,
+        contentLength: Long = -1,
+        seenRegularHeader: Boolean = false,
+        headers: VectorBuilder[HttpHeader] = new VectorBuilder[HttpHeader]): HttpResponse =
       if (remainingHeaders.isEmpty) {
         // https://httpwg.org/specs/rfc7540.html#rfc.section.8.1.2.4: these pseudo header fields are mandatory for a response
         checkRequiredPseudoHeader(":status", status)
@@ -58,8 +57,7 @@ private[http2] object ResponseParsing {
           status = status,
           headers = headers.result(),
           entity = entity,
-          HttpProtocols.`HTTP/2.0`
-        ).withAttributes(subStream.correlationAttributes)
+          HttpProtocols.`HTTP/2.0`).withAttributes(subStream.correlationAttributes)
         sslSessionAttribute match {
           case Some(sslSession) => response.addAttribute(AttributeKeys.sslSession, SslSessionInfo(sslSession))
           case None             => response
@@ -73,20 +71,24 @@ private[http2] object ResponseParsing {
 
         case ("content-type", contentTypeValue: ContentType) =>
           if (contentType.isEmpty)
-            rec(remainingHeaders.tail, status, OptionVal.Some(contentTypeValue), contentLength, seenRegularHeader, headers)
+            rec(remainingHeaders.tail, status, OptionVal.Some(contentTypeValue), contentLength, seenRegularHeader,
+              headers)
           else
             malformedRequest("HTTP message must not contain more than one content-type header")
 
         case ("content-type", ct: String) =>
           if (contentType.isEmpty) {
-            val contentTypeValue = ContentType.parse(ct).right.getOrElse(malformedRequest(s"Invalid content-type: '$ct'"))
-            rec(remainingHeaders.tail, status, OptionVal.Some(contentTypeValue), contentLength, seenRegularHeader, headers)
+            val contentTypeValue =
+              ContentType.parse(ct).right.getOrElse(malformedRequest(s"Invalid content-type: '$ct'"))
+            rec(remainingHeaders.tail, status, OptionVal.Some(contentTypeValue), contentLength, seenRegularHeader,
+              headers)
           } else malformedRequest("HTTP message must not contain more than one content-type header")
 
         case ("content-length", length: String) =>
           if (contentLength == -1) {
             val contentLengthValue = length.toLong
-            if (contentLengthValue < 0) malformedRequest("HTTP message must not contain a negative content-length header")
+            if (contentLengthValue < 0)
+              malformedRequest("HTTP message must not contain a negative content-length header")
             rec(remainingHeaders.tail, status, contentType, contentLengthValue, seenRegularHeader, headers)
           } else malformedRequest("HTTP message must not contain more than one content-length header")
 
@@ -94,12 +96,14 @@ private[http2] object ResponseParsing {
           malformedRequest(s"Unexpected pseudo-header '$name' in response")
 
         case (_, httpHeader: HttpHeader) =>
-          rec(remainingHeaders.tail, status, contentType, contentLength, seenRegularHeader = true, headers += httpHeader)
+          rec(remainingHeaders.tail, status, contentType, contentLength, seenRegularHeader = true,
+            headers += httpHeader)
 
         case (name, value: String) =>
           val httpHeader = parseHeaderPair(httpHeaderParser, name, value)
           validateHeader(httpHeader)
-          rec(remainingHeaders.tail, status, contentType, contentLength, seenRegularHeader = true, headers += httpHeader)
+          rec(remainingHeaders.tail, status, contentType, contentLength, seenRegularHeader = true,
+            headers += httpHeader)
       }
 
     rec(subStream.initialHeaders.keyValuePairs)
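The tail-recursive `rec` above folds over the decoded header pairs exactly once, capturing pseudo-headers (like `:status`) into dedicated parameters and accumulating regular headers in a `VectorBuilder`. A much-reduced sketch of that fold — `HeaderFold` and its simplified string-only pairs are hypothetical:

```scala
import scala.annotation.tailrec
import scala.collection.immutable.VectorBuilder

// Hypothetical, much-reduced version of the fold in parseResponse: walk the
// header pairs once, capture the ":status" pseudo-header, and accumulate the
// regular headers in a VectorBuilder.
object HeaderFold {
  @tailrec
  def rec(
      remaining: Seq[(String, String)],
      status: Int = -1,
      headers: VectorBuilder[(String, String)] = new VectorBuilder[(String, String)])
      : (Int, Vector[(String, String)]) =
    remaining match {
      case Seq() =>
        require(status != -1, "missing mandatory pseudo-header ':status'")
        (status, headers.result())
      case (":status", value) +: tail => rec(tail, value.toInt, headers)
      case pair +: tail               => rec(tail, status, headers += pair)
    }
}
```

Single-pass accumulation avoids building intermediate collections per header, which matters on the per-request hot path.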
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/framing/FrameRenderer.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/framing/FrameRenderer.scala
index dde517a9c..3a221a559 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/framing/FrameRenderer.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/framing/FrameRenderer.scala
@@ -25,8 +25,7 @@ private[http2] object FrameRenderer {
           8 + debug.length,
           Http2Protocol.FrameType.GOAWAY,
           Http2Protocol.Flags.NO_FLAGS,
-          Http2Protocol.NoStreamId
-        )
+          Http2Protocol.NoStreamId)
           .putInt32(lastStreamId)
           .putInt32(errorCode.id)
           // appends debug data, if any
@@ -39,8 +38,7 @@ private[http2] object FrameRenderer {
           payload.length,
           Http2Protocol.FrameType.DATA,
           Http2Protocol.Flags.END_STREAM.ifSet(endStream),
-          streamId
-        )
+          streamId)
           .put(payload)
           .build()
       case HeadersFrame(streamId, endStream, endHeaders, headerBlockFragment, prioInfo) =>
@@ -48,10 +46,9 @@ private[http2] object FrameRenderer {
           (if (prioInfo.isDefined) 5 else 0) + headerBlockFragment.length,
           Http2Protocol.FrameType.HEADERS,
           Http2Protocol.Flags.END_STREAM.ifSet(endStream) |
-            Http2Protocol.Flags.END_HEADERS.ifSet(endHeaders) |
-            Http2Protocol.Flags.PRIORITY.ifSet(prioInfo.isDefined),
-          streamId
-        )
+          Http2Protocol.Flags.END_HEADERS.ifSet(endHeaders) |
+          Http2Protocol.Flags.PRIORITY.ifSet(prioInfo.isDefined),
+          streamId)
           .putPriorityInfo(prioInfo)
           .put(headerBlockFragment)
           .build()
@@ -61,8 +58,7 @@ private[http2] object FrameRenderer {
           4,
           Http2Protocol.FrameType.WINDOW_UPDATE,
           Http2Protocol.Flags.NO_FLAGS,
-          streamId
-        )
+          streamId)
           .putInt32(windowSizeIncrement)
           .build()
 
@@ -71,8 +67,7 @@ private[http2] object FrameRenderer {
           payload.length,
           Http2Protocol.FrameType.CONTINUATION,
           Http2Protocol.Flags.END_HEADERS.ifSet(endHeaders),
-          streamId
-        )
+          streamId)
           .put(payload)
           .build()
 
@@ -81,8 +76,7 @@ private[http2] object FrameRenderer {
           settings.length * 6,
           Http2Protocol.FrameType.SETTINGS,
           Http2Protocol.Flags.NO_FLAGS,
-          Http2Protocol.NoStreamId
-        )
+          Http2Protocol.NoStreamId)
 
         @tailrec def renderNext(remaining: Seq[Setting]): Unit =
           remaining match {
@@ -102,8 +96,7 @@ private[http2] object FrameRenderer {
           0,
           Http2Protocol.FrameType.SETTINGS,
           Http2Protocol.Flags.ACK,
-          Http2Protocol.NoStreamId
-        )
+          Http2Protocol.NoStreamId)
           .build()
 
       case PingFrame(ack, data) =>
@@ -111,8 +104,7 @@ private[http2] object FrameRenderer {
           data.length,
           Http2Protocol.FrameType.PING,
           Http2Protocol.Flags.ACK.ifSet(ack),
-          Http2Protocol.NoStreamId
-        )
+          Http2Protocol.NoStreamId)
           .put(data)
           .build()
 
@@ -121,8 +113,7 @@ private[http2] object FrameRenderer {
           4,
           Http2Protocol.FrameType.RST_STREAM,
           Http2Protocol.Flags.NO_FLAGS,
-          streamId
-        )
+          streamId)
           .putInt32(errorCode.id)
           .build()
 
@@ -131,8 +122,7 @@ private[http2] object FrameRenderer {
           4 + headerBlockFragment.length,
           Http2Protocol.FrameType.PUSH_PROMISE,
           Http2Protocol.Flags.END_HEADERS.ifSet(endHeaders),
-          streamId
-        )
+          streamId)
           .putInt32(promisedStreamId)
           .put(headerBlockFragment)
           .build()
@@ -142,8 +132,7 @@ private[http2] object FrameRenderer {
           5,
           Http2Protocol.FrameType.PRIORITY,
           Http2Protocol.Flags.NO_FLAGS,
-          streamId
-        )
+          streamId)
           .putPriorityInfo(frame)
           .build()
       case _ => throw new IllegalStateException(s"Unexpected frame type ${frame.frameTypeName}.")
@@ -155,7 +144,8 @@ private[http2] object FrameRenderer {
       .build()
 
   private object Frame {
-    def apply(payloadSize: Int, tpe: FrameType, flags: ByteFlag, streamId: Int): Frame = new Frame(payloadSize, tpe, flags, streamId)
+    def apply(payloadSize: Int, tpe: FrameType, flags: ByteFlag, streamId: Int): Frame =
+      new Frame(payloadSize, tpe, flags, streamId)
   }
   private class Frame(payloadSize: Int, tpe: FrameType, flags: ByteFlag, streamId: Int) {
     private val targetSize = 9 + payloadSize
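The hunks above show the style this change enforces: closing parentheses pulled onto the last argument line instead of dangling, hex literals uppercased (0xff -> 0xFF), infix calls rewritten to dot notation (cs map f -> cs.map(f)), and 4-space continuation indents on definition sites. A hypothetical `.scalafmt.conf` fragment that would produce this style (the actual file added by this change is not shown in this excerpt) might look like:

```hocon
# Hypothetical sketch; the real .scalafmt.conf in this commit may differ.
maxColumn = 120                   # lines in the diff wrap near 120 columns
danglingParentheses.preset = false # closing ')' stays on the argument line
literals.hexDigits = Upper        # 0xff -> 0xFF
rewrite.rules = [AvoidInfix]      # `cs map f` -> `cs.map(f)`
indent.defnSite = 4               # 4-space continuation indent for parameters
```

These option names are from scalafmt 3.x; earlier versions spell some of them differently (e.g. `continuationIndent.defnSite`).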
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/framing/Http2FrameParsing.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/framing/Http2FrameParsing.scala
index 7de4beab3..8e50f65f0 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/framing/Http2FrameParsing.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/framing/Http2FrameParsing.scala
@@ -41,7 +41,8 @@ private[http] object Http2FrameParsing {
     readSettings(Nil)
   }
 
-  def parseFrame(tpe: FrameType, flags: ByteFlag, streamId: Int, payload: ByteReader, log: LoggingAdapter): FrameEvent = {
+  def parseFrame(
+      tpe: FrameType, flags: ByteFlag, streamId: Int, payload: ByteReader, log: LoggingAdapter): FrameEvent = {
     // TODO: add @switch? seems non-trivial for now
     tpe match {
       case FrameType.GOAWAY =>
@@ -55,16 +56,16 @@ private[http] object Http2FrameParsing {
         val priority = Flags.PRIORITY.isSet(flags)
 
         val paddingLength =
-          if (pad) payload.readByte() & 0xff
+          if (pad) payload.readByte() & 0xFF
           else 0
 
         val priorityInfo =
           if (priority) {
             val dependencyAndE = payload.readIntBE()
-            val weight = payload.readByte() & 0xff
+            val weight = payload.readByte() & 0xFF
 
             val exclusiveFlag = (dependencyAndE >>> 31) == 1 // most significant bit for exclusive flag
-            val dependencyId = dependencyAndE & 0x7fffffff // remaining 31 bits for the dependency part
+            val dependencyId = dependencyAndE & 0x7FFFFFFF // remaining 31 bits for the dependency part
             Http2Compliance.requireNoSelfDependency(streamId, dependencyId)
             Some(PriorityFrame(streamId, exclusiveFlag, dependencyId, weight))
           } else
@@ -77,7 +78,7 @@ private[http] object Http2FrameParsing {
         val endStream = Flags.END_STREAM.isSet(flags)
 
         val paddingLength =
-          if (pad) payload.readByte() & 0xff
+          if (pad) payload.readByte() & 0xFF
           else 0
 
         DataFrame(streamId, endStream, payload.take(payload.remainingSize - paddingLength))
@@ -89,12 +90,14 @@ private[http] object Http2FrameParsing {
         if (ack) {
           // validate that payload is empty: (6.5)
           if (payload.hasRemaining)
-            throw new Http2Compliance.IllegalPayloadInSettingsAckFrame(payload.remainingSize, s"SETTINGS ACK frame MUST NOT contain payload (spec 6.5)!")
+            throw new Http2Compliance.IllegalPayloadInSettingsAckFrame(payload.remainingSize,
+              s"SETTINGS ACK frame MUST NOT contain payload (spec 6.5)!")
 
           SettingsAckFrame(Nil) // TODO if we were to send out settings, here would be the spot to include the acks for the ones we've sent out
         } else {
 
-          if (payload.remainingSize % 6 != 0) throw new Http2Compliance.IllegalPayloadLengthInSettingsFrame(payload.remainingSize, "SETTINGS payload MUST be a multiple of multiple of 6 octets")
+          if (payload.remainingSize % 6 != 0) throw new Http2Compliance.IllegalPayloadLengthInSettingsFrame(
+            payload.remainingSize, "SETTINGS payload MUST be a multiple of multiple of 6 octets")
           SettingsFrame(readSettings(payload, log))
         }
 
@@ -127,8 +130,8 @@ private[http] object Http2FrameParsing {
         Http2Compliance.requireFrameSize(payload.remainingSize, 5)
         val streamDependency = payload.readIntBE() // whole word
         val exclusiveFlag = (streamDependency >>> 31) == 1 // most significant bit for exclusive flag
-        val dependencyId = streamDependency & 0x7fffffff // remaining 31 bits for the dependency part
-        val priority = payload.readByte() & 0xff
+        val dependencyId = streamDependency & 0x7FFFFFFF // remaining 31 bits for the dependency part
+        val priority = payload.readByte() & 0xFF
         Http2Compliance.requireNoSelfDependency(streamId, dependencyId)
         PriorityFrame(streamId, exclusiveFlag, dependencyId, priority)
 
@@ -137,7 +140,7 @@ private[http] object Http2FrameParsing {
         val endHeaders = Flags.END_HEADERS.isSet(flags)
 
         val paddingLength =
-          if (pad) payload.readByte() & 0xff
+          if (pad) payload.readByte() & 0xFF
           else 0
 
         val promisedStreamId = payload.readIntBE()
@@ -152,7 +155,8 @@ private[http] object Http2FrameParsing {
 
 /** INTERNAL API */
 @InternalApi
-private[http2] class Http2FrameParsing(shouldReadPreface: Boolean, log: LoggingAdapter) extends ByteStringParser[FrameEvent] {
+private[http2] class Http2FrameParsing(
+    shouldReadPreface: Boolean, log: LoggingAdapter) extends ByteStringParser[FrameEvent] {
   import ByteStringParser._
   import Http2FrameParsing._
 
@@ -195,4 +199,3 @@ private[http2] class Http2FrameParsing(shouldReadPreface: Boolean, log: LoggingA
 
     }
 }
-
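Several hunks in this file manipulate the HTTP/2 PRIORITY field: the most significant bit of the 32-bit word is the exclusive flag, the remaining 31 bits are the dependency stream id, and the following byte is an unsigned weight. A standalone sketch of that bit layout (illustrative only; `PriorityField` is not a name from this commit):

```scala
// Sketch of the PRIORITY-field bit manipulation seen in the hunks above.
object PriorityField {
  def decode(word: Int, weightByte: Byte): (Boolean, Int, Int) = {
    val exclusive = (word >>> 31) == 1    // most significant bit: exclusive flag
    val dependencyId = word & 0x7FFFFFFF  // remaining 31 bits: dependency stream id
    val weight = weightByte & 0xFF        // weight is an unsigned byte
    (exclusive, dependencyId, weight)
  }
}
```

Note that masking with 0x7FFFFFFF rather than sign-extending is what lets a dependency id with the top payload bit set come out non-negative.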
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/hpack/HeaderCompression.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/hpack/HeaderCompression.scala
index d4ddd2657..c6f199e42 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/hpack/HeaderCompression.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/hpack/HeaderCompression.scala
@@ -25,63 +25,68 @@ private[http2] object HeaderCompression extends GraphStage[FlowShape[FrameEvent,
 
   val shape = FlowShape(eventsIn, eventsOut)
 
-  def createLogic(inheritedAttributes: Attributes): GraphStageLogic = new GraphStageLogic(shape) with StageLogging with InHandler with OutHandler { logic =>
-    setHandlers(eventsIn, eventsOut, this)
-    private val currentMaxFrameSize = Http2Protocol.InitialMaxFrameSize
+  def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
+    new GraphStageLogic(shape) with StageLogging with InHandler with OutHandler { logic =>
+      setHandlers(eventsIn, eventsOut, this)
+      private val currentMaxFrameSize = Http2Protocol.InitialMaxFrameSize
 
-    val encoder = new akka.http.shaded.com.twitter.hpack.Encoder(Http2Protocol.InitialMaxHeaderTableSize)
-    val os = new ByteArrayOutputStream(128)
+      val encoder = new akka.http.shaded.com.twitter.hpack.Encoder(Http2Protocol.InitialMaxHeaderTableSize)
+      val os = new ByteArrayOutputStream(128)
 
-    def onPull(): Unit = pull(eventsIn)
-    def onPush(): Unit = grab(eventsIn) match {
-      case ack @ SettingsAckFrame(s) =>
-        applySettings(s)
-        push(eventsOut, ack)
-      case ParsedHeadersFrame(streamId, endStream, kvs, prioInfo) =>
-        // When ending the stream without any payload, use a DATA frame rather than
-        // a HEADERS frame to work around https://github.com/golang/go/issues/47851.
-        if (endStream && kvs.isEmpty) push(eventsOut, DataFrame(streamId, endStream, ByteString.empty))
-        else {
-          kvs.foreach {
-            case (key, value: String) =>
-              encoder.encodeHeader(os, key, value, false)
-            case (key, value) =>
-              throw new IllegalStateException(s"Didn't expect key-value-pair [$key] -> [$value](${value.getClass}) here.")
-          }
-          val result = ByteString.fromArrayUnsafe(os.toByteArray) // BAOS.toByteArray always creates a copy
-          os.reset()
-          if (result.size <= currentMaxFrameSize) push(eventsOut, HeadersFrame(streamId, endStream, endHeaders = true, result, prioInfo))
+      def onPull(): Unit = pull(eventsIn)
+      def onPush(): Unit = grab(eventsIn) match {
+        case ack @ SettingsAckFrame(s) =>
+          applySettings(s)
+          push(eventsOut, ack)
+        case ParsedHeadersFrame(streamId, endStream, kvs, prioInfo) =>
+          // When ending the stream without any payload, use a DATA frame rather than
+          // a HEADERS frame to work around https://github.com/golang/go/issues/47851.
+          if (endStream && kvs.isEmpty) push(eventsOut, DataFrame(streamId, endStream, ByteString.empty))
           else {
-            val first = HeadersFrame(streamId, endStream, endHeaders = false, result.take(currentMaxFrameSize), prioInfo)
+            kvs.foreach {
+              case (key, value: String) =>
+                encoder.encodeHeader(os, key, value, false)
+              case (key, value) =>
+                throw new IllegalStateException(
+                  s"Didn't expect key-value-pair [$key] -> [$value](${value.getClass}) here.")
+            }
+            val result = ByteString.fromArrayUnsafe(os.toByteArray) // BAOS.toByteArray always creates a copy
+            os.reset()
+            if (result.size <= currentMaxFrameSize)
+              push(eventsOut, HeadersFrame(streamId, endStream, endHeaders = true, result, prioInfo))
+            else {
+              val first =
+                HeadersFrame(streamId, endStream, endHeaders = false, result.take(currentMaxFrameSize), prioInfo)
 
-            push(eventsOut, first)
-            setHandler(eventsOut, new OutHandler {
-              private var remainingData = result.drop(currentMaxFrameSize)
+              push(eventsOut, first)
+              setHandler(eventsOut,
+                new OutHandler {
+                  private var remainingData = result.drop(currentMaxFrameSize)
 
-              def onPull(): Unit = {
-                val thisFragment = remainingData.take(currentMaxFrameSize)
-                val rest = remainingData.drop(currentMaxFrameSize)
-                val last = rest.isEmpty
+                  def onPull(): Unit = {
+                    val thisFragment = remainingData.take(currentMaxFrameSize)
+                    val rest = remainingData.drop(currentMaxFrameSize)
+                    val last = rest.isEmpty
 
-                push(eventsOut, ContinuationFrame(streamId, endHeaders = last, thisFragment))
-                if (last) setHandler(eventsOut, logic)
-                else remainingData = rest
-              }
-            })
+                    push(eventsOut, ContinuationFrame(streamId, endHeaders = last, thisFragment))
+                    if (last) setHandler(eventsOut, logic)
+                    else remainingData = rest
+                  }
+                })
+            }
           }
+        case x => push(eventsOut, x)
+      }
+
+      def applySettings(s: immutable.Seq[Setting]): Unit =
+        s.foreach {
+          case Setting(SettingIdentifier.SETTINGS_HEADER_TABLE_SIZE, size) =>
+            log.debug("Applied SETTINGS_HEADER_TABLE_SIZE({}) in header compression", size)
+            // 'size' is strictly spoken unsigned, but the encoder is allowed to
+            // pick any size equal to or less than this value (6.5.2)
+            if (size >= 0) encoder.setMaxHeaderTableSize(os, size)
+            else encoder.setMaxHeaderTableSize(os, Int.MaxValue)
+          case _ => // ignore, not applicable to this stage
         }
-      case x => push(eventsOut, x)
     }
-
-    def applySettings(s: immutable.Seq[Setting]): Unit =
-      s foreach {
-        case Setting(SettingIdentifier.SETTINGS_HEADER_TABLE_SIZE, size) =>
-          log.debug("Applied SETTINGS_HEADER_TABLE_SIZE({}) in header compression", size)
-          // 'size' is strictly spoken unsigned, but the encoder is allowed to
-          // pick any size equal to or less than this value (6.5.2)
-          if (size >= 0) encoder.setMaxHeaderTableSize(os, size)
-          else encoder.setMaxHeaderTableSize(os, Int.MaxValue)
-        case _ => // ignore, not applicable to this stage
-      }
-  }
 }
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/hpack/HeaderDecompression.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/hpack/HeaderDecompression.scala
index c96b28d96..5d96900ad 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/hpack/HeaderDecompression.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/hpack/HeaderDecompression.scala
@@ -27,7 +27,8 @@ import scala.collection.immutable.VectorBuilder
  * Can be used on server and client side.
  */
 @InternalApi
-private[http2] final class HeaderDecompression(masterHeaderParser: HttpHeaderParser, parserSettings: ParserSettings) extends GraphStage[FlowShape[FrameEvent, FrameEvent]] {
+private[http2] final class HeaderDecompression(masterHeaderParser: HttpHeaderParser, parserSettings: ParserSettings)
+    extends GraphStage[FlowShape[FrameEvent, FrameEvent]] {
   val UTF8 = StandardCharsets.UTF_8
   val US_ASCII = StandardCharsets.US_ASCII
 
@@ -36,90 +37,96 @@ private[http2] final class HeaderDecompression(masterHeaderParser: HttpHeaderPar
 
   val shape = FlowShape(eventsIn, eventsOut)
 
-  def createLogic(inheritedAttributes: Attributes): GraphStageLogic = new HandleOrPassOnStage[FrameEvent, FrameEvent](shape) {
-    val httpHeaderParser = masterHeaderParser.createShallowCopy()
-    val decoder = new akka.http.shaded.com.twitter.hpack.Decoder(Http2Protocol.InitialMaxHeaderListSize, Http2Protocol.InitialMaxHeaderTableSize)
-
-    become(Idle)
-
-    // simple state machine
-    // Idle: no ongoing HEADERS parsing
-    // Receiving headers: waiting for CONTINUATION frame
-
-    def parseAndEmit(streamId: Int, endStream: Boolean, payload: ByteString, prioInfo: Option[PriorityFrame]): Unit = {
-      val headers = new VectorBuilder[(String, AnyRef)]
-      object Receiver extends HeaderListener {
-        def addHeader(name: String, value: String, parsed: AnyRef, sensitive: Boolean): AnyRef = {
-          if (parsed ne null) {
-            headers += name -> parsed
-            parsed
-          } else {
-            import Http2HeaderParsing._
-            def handle(parsed: AnyRef): AnyRef = {
+  def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
+    new HandleOrPassOnStage[FrameEvent, FrameEvent](shape) {
+      val httpHeaderParser = masterHeaderParser.createShallowCopy()
+      val decoder = new akka.http.shaded.com.twitter.hpack.Decoder(Http2Protocol.InitialMaxHeaderListSize,
+        Http2Protocol.InitialMaxHeaderTableSize)
+
+      become(Idle)
+
+      // simple state machine
+      // Idle: no ongoing HEADERS parsing
+      // Receiving headers: waiting for CONTINUATION frame
+
+      def parseAndEmit(
+          streamId: Int, endStream: Boolean, payload: ByteString, prioInfo: Option[PriorityFrame]): Unit = {
+        val headers = new VectorBuilder[(String, AnyRef)]
+        object Receiver extends HeaderListener {
+          def addHeader(name: String, value: String, parsed: AnyRef, sensitive: Boolean): AnyRef = {
+            if (parsed ne null) {
               headers += name -> parsed
               parsed
-            }
-
-            name match {
-              case "content-type"   => handle(ContentType.parse(name, value, parserSettings))
-              case ":authority"     => handle(Authority.parse(name, value, parserSettings))
-              case ":path"          => handle(PathAndQuery.parse(name, value, parserSettings))
-              case ":method"        => handle(Method.parse(name, value, parserSettings))
-              case ":scheme"        => handle(Scheme.parse(name, value, parserSettings))
-              case "content-length" => handle(ContentLength.parse(name, value, parserSettings))
-              case "cookie"         => handle(Cookie.parse(name, value, parserSettings))
-              case x if x(0) == ':' => handle(value)
-              case _ =>
-                // cannot use OtherHeader.parse because that doesn't has access to header parser
-                val header = parseHeaderPair(httpHeaderParser, name, value)
-                RequestParsing.validateHeader(header)
-                handle(header)
+            } else {
+              import Http2HeaderParsing._
+              def handle(parsed: AnyRef): AnyRef = {
+                headers += name -> parsed
+                parsed
+              }
+
+              name match {
+                case "content-type"   => handle(ContentType.parse(name, value, parserSettings))
+                case ":authority"     => handle(Authority.parse(name, value, parserSettings))
+                case ":path"          => handle(PathAndQuery.parse(name, value, parserSettings))
+                case ":method"        => handle(Method.parse(name, value, parserSettings))
+                case ":scheme"        => handle(Scheme.parse(name, value, parserSettings))
+                case "content-length" => handle(ContentLength.parse(name, value, parserSettings))
+                case "cookie"         => handle(Cookie.parse(name, value, parserSettings))
+                case x if x(0) == ':' => handle(value)
+                case _                =>
+                  // cannot use OtherHeader.parse because that doesn't has access to header parser
+                  val header = parseHeaderPair(httpHeaderParser, name, value)
+                  RequestParsing.validateHeader(header)
+                  handle(header)
+              }
             }
           }
         }
+        try {
+          decoder.decode(ByteStringInputStream(payload), Receiver)
+          decoder.endHeaderBlock() // TODO: do we have to check the result here?
+
+          push(eventsOut, ParsedHeadersFrame(streamId, endStream, headers.result(), prioInfo))
+        } catch {
+          case ex: IOException =>
+            // this is signalled by the decoder when it failed, we want to react to this by rendering a GOAWAY frame
+            fail(eventsOut,
+              new Http2Compliance.Http2ProtocolException(ErrorCode.COMPRESSION_ERROR, "Decompression failed."))
+        }
       }
-      try {
-        decoder.decode(ByteStringInputStream(payload), Receiver)
-        decoder.endHeaderBlock() // TODO: do we have to check the result here?
-
-        push(eventsOut, ParsedHeadersFrame(streamId, endStream, headers.result(), prioInfo))
-      } catch {
-        case ex: IOException =>
-          // this is signalled by the decoder when it failed, we want to react to this by rendering a GOAWAY frame
-          fail(eventsOut, new Http2Compliance.Http2ProtocolException(ErrorCode.COMPRESSION_ERROR, "Decompression failed."))
-      }
-    }
 
-    object Idle extends State {
-      val handleEvent: PartialFunction[FrameEvent, Unit] = {
-        case HeadersFrame(streamId, endStream, endHeaders, fragment, prioInfo) =>
-          if (endHeaders) parseAndEmit(streamId, endStream, fragment, prioInfo)
-          else {
-            become(new ReceivingHeaders(streamId, endStream, fragment, prioInfo))
-            pull(eventsIn)
-          }
-        case c: ContinuationFrame =>
-          protocolError(s"Received unexpected continuation frame: $c")
+      object Idle extends State {
+        val handleEvent: PartialFunction[FrameEvent, Unit] = {
+          case HeadersFrame(streamId, endStream, endHeaders, fragment, prioInfo) =>
+            if (endHeaders) parseAndEmit(streamId, endStream, fragment, prioInfo)
+            else {
+              become(new ReceivingHeaders(streamId, endStream, fragment, prioInfo))
+              pull(eventsIn)
+            }
+          case c: ContinuationFrame =>
+            protocolError(s"Received unexpected continuation frame: $c")
 
-        // FIXME: handle SETTINGS frames that change decompression parameters
+          // FIXME: handle SETTINGS frames that change decompression parameters
+        }
       }
-    }
-    class ReceivingHeaders(streamId: Int, endStream: Boolean, initiallyReceivedData: ByteString, priorityInfo: Option[PriorityFrame]) extends State {
-      var receivedData = initiallyReceivedData
-
-      val handleEvent: PartialFunction[FrameEvent, Unit] = {
-        case ContinuationFrame(`streamId`, endHeaders, payload) =>
-          if (endHeaders) {
-            parseAndEmit(streamId, endStream, receivedData ++ payload, priorityInfo)
-            become(Idle)
-          } else {
-            receivedData ++= payload
-            pull(eventsIn)
-          }
-        case x => protocolError(s"While waiting for CONTINUATION frame on stream $streamId received unexpected frame $x")
+      class ReceivingHeaders(streamId: Int, endStream: Boolean, initiallyReceivedData: ByteString,
+          priorityInfo: Option[PriorityFrame]) extends State {
+        var receivedData = initiallyReceivedData
+
+        val handleEvent: PartialFunction[FrameEvent, Unit] = {
+          case ContinuationFrame(`streamId`, endHeaders, payload) =>
+            if (endHeaders) {
+              parseAndEmit(streamId, endStream, receivedData ++ payload, priorityInfo)
+              become(Idle)
+            } else {
+              receivedData ++= payload
+              pull(eventsIn)
+            }
+          case x =>
+            protocolError(s"While waiting for CONTINUATION frame on stream $streamId received unexpected frame $x")
+        }
       }
-    }
 
-    def protocolError(msg: String): Unit = failStage(new Http2ProtocolException(msg))
-  }
+      def protocolError(msg: String): Unit = failStage(new Http2ProtocolException(msg))
+    }
 }
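The comments in the hunks above describe HeaderDecompression's two-state machine: Idle until a HEADERS frame without END_HEADERS arrives, then buffering CONTINUATION payloads until one carries END_HEADERS. A simplified, self-contained sketch of that assembly logic (assumed for illustration; not this commit's code, and using strings in place of ByteString fragments):

```scala
// Simplified model of the Idle / ReceivingHeaders states described above.
object HeaderAssembly {
  sealed trait Event
  final case class Headers(endHeaders: Boolean, fragment: String) extends Event
  final case class Continuation(endHeaders: Boolean, fragment: String) extends Event

  // Returns the assembled header block, or None on a protocol error
  // (e.g. a CONTINUATION frame arriving while Idle).
  def assemble(events: List[Event]): Option[String] = {
    def loop(rest: List[Event], buffered: Option[String]): Option[String] =
      (rest, buffered) match {
        case (Headers(true, f) :: _, None)               => Some(f)             // complete in one frame
        case (Headers(false, f) :: tail, None)           => loop(tail, Some(f)) // start buffering
        case (Continuation(true, f) :: _, Some(acc))     => Some(acc + f)       // final fragment
        case (Continuation(false, f) :: tail, Some(acc)) => loop(tail, Some(acc + f))
        case _                                           => None                // unexpected frame for state
      }
    loop(events, None)
  }
}
```

The real stage additionally checks that CONTINUATION frames carry the same stream id as the initiating HEADERS frame, which this sketch omits.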
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/hpack/Http2HeaderParsing.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/hpack/Http2HeaderParsing.scala
index 966e853cf..ac444a239 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/hpack/Http2HeaderParsing.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/hpack/Http2HeaderParsing.scala
@@ -61,8 +61,8 @@ private[akka] object Http2HeaderParsing {
 
   val Parsers: Map[String, HeaderParser[AnyRef]] =
     Seq(
-      Method, Scheme, Authority, PathAndQuery, ContentType, Status, ContentLength, Cookie
-    ).map(p => p.headerName -> p).toMap
+      Method, Scheme, Authority, PathAndQuery, ContentType, Status, ContentLength, Cookie).map(p =>
+      p.headerName -> p).toMap
 
   def parse(name: String, value: String, parserSettings: ParserSettings): (String, AnyRef) = {
     name -> Parsers.getOrElse(name, Modeled).parse(name, value, parserSettings)
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/util/AsciiTreeLayout.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/util/AsciiTreeLayout.scala
index bd94f570a..756bc409c 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/http2/util/AsciiTreeLayout.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/http2/util/AsciiTreeLayout.scala
@@ -13,10 +13,10 @@ private[http2] object AsciiTreeLayout {
   // [info]   |
   // [info]   +-quux
   def toAscii[A](
-    top:       A,
-    children:  A => Seq[A],
-    display:   A => String,
-    maxColumn: Int         = 80): String = {
+      top: A,
+      children: A => Seq[A],
+      display: A => String,
+      maxColumn: Int = 80): String = {
     val twoSpaces = " " + " " // prevent accidentally being converted into a tab
     def limitLine(s: String): String =
       if (s.length > maxColumn) s.slice(0, maxColumn - 2) + ".."
@@ -24,11 +24,11 @@ private[http2] object AsciiTreeLayout {
     def insertBar(s: String, at: Int): String =
       if (at < s.length)
         s.slice(0, at) +
-          (s(at).toString match {
-            case " " => "|"
-            case x   => x
-          }) +
-          s.slice(at + 1, s.length)
+        (s(at).toString match {
+          case " " => "|"
+          case x   => x
+        }) +
+        s.slice(at + 1, s.length)
       else s
     def toAsciiLines(node: A, level: Int, parents: Set[A]): Vector[String] =
       if (parents contains node) // cycle
@@ -36,13 +36,14 @@ private[http2] object AsciiTreeLayout {
       else {
         val line = limitLine((twoSpaces * level) + (if (level == 0) "" else "+-") + display(node))
         val cs = Vector(children(node): _*)
-        val childLines = cs map {
+        val childLines = cs.map {
           toAsciiLines(_, level + 1, parents + node)
         }
-        val withBar = childLines.zipWithIndex flatMap {
-          case (lines, pos) if pos < (cs.size - 1) => lines map {
-            insertBar(_, 2 * (level + 1))
-          }
+        val withBar = childLines.zipWithIndex.flatMap {
+          case (lines, pos) if pos < (cs.size - 1) =>
+            lines.map {
+              insertBar(_, 2 * (level + 1))
+            }
           case (lines, pos) =>
             if (lines.last.trim != "") lines ++ Vector(twoSpaces * (level + 1))
             else lines
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/BodyPartParser.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/BodyPartParser.scala
index a21d35e99..dd9df854e 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/BodyPartParser.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/BodyPartParser.scala
@@ -27,18 +27,19 @@ import scala.collection.mutable.ListBuffer
  */
 @InternalApi
 private[http] final class BodyPartParser(
-  defaultContentType: ContentType,
-  boundary:           String,
-  log:                LoggingAdapter,
-  settings:           BodyPartParser.Settings)
-  extends GraphStage[FlowShape[ByteString, BodyPartParser.Output]] {
+    defaultContentType: ContentType,
+    boundary: String,
+    log: LoggingAdapter,
+    settings: BodyPartParser.Settings)
+    extends GraphStage[FlowShape[ByteString, BodyPartParser.Output]] {
   import BodyPartParser._
   import settings._
 
   require(boundary.nonEmpty, "'boundary' parameter of multipart Content-Type must be non-empty")
-  require(boundary.charAt(boundary.length - 1) != ' ', "'boundary' parameter of multipart Content-Type must not end with a space char")
+  require(boundary.charAt(boundary.length - 1) != ' ',
+    "'boundary' parameter of multipart Content-Type must not end with a space char")
   require(
-    boundaryChar matchesAll boundary,
+    boundaryChar.matchesAll(boundary),
     s"'boundary' parameter of multipart Content-Type contains illegal character '${boundaryChar.firstMismatch(boundary).get}'")
 
   sealed trait StateResult // phantom type for ensuring soundness of our parsing method setup
@@ -65,7 +66,7 @@ private[http] final class BodyPartParser(
           val elem = grab(in)
           try run(elem)
           catch {
-            case e: ParsingException => fail(e.info)
+            case e: ParsingException    => fail(e.info)
             case NotEnoughDataException =>
               // we are missing a try/catch{continue} wrapper somewhere
               throw new IllegalStateException("unexpected NotEnoughDataException", NotEnoughDataException)
@@ -130,7 +131,8 @@ private[http] final class BodyPartParser(
         try {
           @tailrec def rec(index: Int): StateResult = {
             val needleEnd = eolConfiguration.boyerMoore.nextIndex(input, index) + eolConfiguration.needle.length
-            if (eolConfiguration.isEndOfLine(input, needleEnd)) parseHeaderLines(input, needleEnd + eolConfiguration.eolLength)
+            if (eolConfiguration.isEndOfLine(input, needleEnd))
+              parseHeaderLines(input, needleEnd + eolConfiguration.eolLength)
             else if (doubleDash(input, needleEnd)) setShouldTerminate()
             else rec(needleEnd)
           }
@@ -140,8 +142,9 @@ private[http] final class BodyPartParser(
           case NotEnoughDataException => continue(input, 0)((newInput, _) => parsePreamble(newInput))
         }
 
-      @tailrec def parseHeaderLines(input: ByteString, lineStart: Int, headers: ListBuffer[HttpHeader] = ListBuffer[HttpHeader](),
-                                    headerCount: Int = 0, cth: Option[`Content-Type`] = None): StateResult = {
+      @tailrec def parseHeaderLines(input: ByteString, lineStart: Int,
+          headers: ListBuffer[HttpHeader] = ListBuffer[HttpHeader](),
+          headerCount: Int = 0, cth: Option[`Content-Type`] = None): StateResult = {
         def contentType =
           cth match {
             case Some(x) => x.contentType
@@ -164,7 +167,8 @@ private[http] final class BodyPartParser(
           case BoundaryHeader =>
             emit(BodyPartStart(headers.toList, _ => HttpEntity.empty(contentType)))
             val ix = lineStart + eolConfiguration.boundaryLength
-            if (eolConfiguration.isEndOfLine(input, ix)) parseHeaderLines(input, ix + eolConfiguration.eolLength, headers, headerCount, None)
+            if (eolConfiguration.isEndOfLine(input, ix))
+              parseHeaderLines(input, ix + eolConfiguration.eolLength, headers, headerCount, None)
             else if (doubleDash(input, ix)) setShouldTerminate()
             else fail("Illegal multipart boundary in message content")
 
@@ -184,24 +188,27 @@ private[http] final class BodyPartParser(
 
       // work-around for compiler complaining about non-tail-recursion if we inline this method
       def parseHeaderLinesAux(headers: ListBuffer[HttpHeader], headerCount: Int,
-                              cth: Option[`Content-Type`])(input: ByteString, lineStart: Int): StateResult =
+          cth: Option[`Content-Type`])(input: ByteString, lineStart: Int): StateResult =
         parseHeaderLines(input, lineStart, headers, headerCount, cth)
 
       def parseEntity(headers: List[HttpHeader], contentType: ContentType,
-                      emitPartChunk: (List[HttpHeader], ContentType, ByteString) => Unit = {
-                        (headers, ct, bytes) =>
-                          emit(BodyPartStart(headers, entityParts => HttpEntity.IndefiniteLength(
-                            ct,
-                            entityParts.collect { case EntityPart(data) => data })))
-                          emit(bytes)
-                      },
-                      emitFinalPartChunk: (List[HttpHeader], ContentType, ByteString) => Unit = {
-                        (headers, ct, bytes) =>
-                          emit(BodyPartStart(headers, { rest =>
-                            StreamUtils.cancelSource(rest)(materializer)
-                            HttpEntity.Strict(ct, bytes)
-                          }))
-                      })(input: ByteString, offset: Int): StateResult =
+          emitPartChunk: (List[HttpHeader], ContentType, ByteString) => Unit = {
+            (headers, ct, bytes) =>
+              emit(BodyPartStart(headers,
+                entityParts =>
+                  HttpEntity.IndefiniteLength(
+                    ct,
+                    entityParts.collect { case EntityPart(data) => data })))
+              emit(bytes)
+          },
+          emitFinalPartChunk: (List[HttpHeader], ContentType, ByteString) => Unit = {
+            (headers, ct, bytes) =>
+              emit(BodyPartStart(headers,
+                { rest =>
+                  StreamUtils.cancelSource(rest)(materializer)
+                  HttpEntity.Strict(ct, bytes)
+                }))
+          })(input: ByteString, offset: Int): StateResult =
         try {
           @tailrec def rec(index: Int): StateResult = {
             val currentPartEnd = eolConfiguration.boyerMoore.nextIndex(input, index)
@@ -226,7 +233,7 @@ private[http] final class BodyPartParser(
             if (emitEnd > offset) {
               emitPartChunk(headers, contentType, input.slice(offset, emitEnd))
               val simpleEmit: (List[HttpHeader], ContentType, ByteString) => Unit = (_, _, bytes) => emit(bytes)
-              continue(input drop emitEnd, 0)(parseEntity(null, null, simpleEmit, simpleEmit))
+              continue(input.drop(emitEnd), 0)(parseEntity(null, null, simpleEmit, simpleEmit))
             } else continue(input, offset)(parseEntity(headers, contentType, emitPartChunk, emitFinalPartChunk))
         }
 
@@ -244,8 +251,8 @@ private[http] final class BodyPartParser(
         state =
           math.signum(offset - input.length) match {
             case -1 => more => next(input ++ more, offset)
-            case 0 => next(_, 0)
-            case 1 => throw new IllegalStateException
+            case 0  => next(_, 0)
+            case 1  => throw new IllegalStateException
           }
         done()
       }
@@ -285,7 +292,8 @@ private[http] object BodyPartParser {
 
   sealed trait Output
   sealed trait PartStart extends Output
-  final case class BodyPartStart(headers: List[HttpHeader], createEntity: Source[Output, NotUsed] => BodyPartEntity) extends PartStart
+  final case class BodyPartStart(headers: List[HttpHeader], createEntity: Source[Output, NotUsed] => BodyPartEntity)
+      extends PartStart
   final case class EntityPart(data: ByteString) extends Output
   final case class ParseError(info: ErrorInfo) extends PartStart
 
@@ -318,7 +326,8 @@ private[http] object BodyPartParser {
 
     def isBoundary(input: ByteString, offset: Int, ix: Int = eolLength): Boolean = {
       @tailrec def process(input: ByteString, offset: Int, ix: Int): Boolean =
-        (ix == needle.length) || (byteAt(input, offset + ix - eol.length) == needle(ix)) && process(input, offset, ix + 1)
+        (ix == needle.length) || (byteAt(input, offset + ix - eol.length) == needle(ix)) && process(input, offset,
+          ix + 1)
 
       process(input, offset, ix)
     }
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/BoyerMoore.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/BoyerMoore.scala
index 30517f11a..896507180 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/BoyerMoore.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/BoyerMoore.scala
@@ -19,7 +19,7 @@ private[parsing] class BoyerMoore(needle: Array[Byte]) {
     val table = Array.fill(256)(needle.length)
     @tailrec def rec(i: Int): Unit =
       if (i < nl1) {
-        table(needle(i) & 0xff) = nl1 - i
+        table(needle(i) & 0xFF) = nl1 - i
         rec(i + 1)
       }
     rec(0)
@@ -61,7 +61,7 @@ private[parsing] class BoyerMoore(needle: Array[Byte]) {
       if (needle(j) == byte) {
         if (j == 0) i // found
         else rec(i - 1, j - 1)
-      } else rec(i + math.max(offsetTable(nl1 - j), charTable(byte & 0xff)), nl1)
+      } else rec(i + math.max(offsetTable(nl1 - j), charTable(byte & 0xFF)), nl1)
     }
     rec(offset + nl1, nl1)
   }
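
(Aside for readers of this patch: the two hunks above only change hex-literal casing, `0xff` → `0xFF`, inside the Boyer-Moore "bad character" logic. For context, a minimal standalone sketch of the shift-table construction that the constructor in `BoyerMoore.scala` performs — the object name `BadCharTable` is illustrative, not part of the patched code:)

```scala
// Sketch of the Boyer-Moore "bad character" shift table, mirroring the
// constructor logic visible in the hunk above. For every byte of the
// needle except the last, the table records how far the search window
// may jump when that byte causes a mismatch; all other byte values get
// the full needle length.
object BadCharTable {
  def build(needle: Array[Byte]): Array[Int] = {
    val nl1 = needle.length - 1
    val table = Array.fill(256)(needle.length) // default: shift by full length
    var i = 0
    while (i < nl1) {
      table(needle(i) & 0xFF) = nl1 - i // distance from this byte to the end
      i += 1
    }
    table
  }

  def main(args: Array[String]): Unit = {
    val t = build("abc".getBytes("US-ASCII"))
    assert(t('a' & 0xFF) == 2) // 'a' sits 2 positions before the last byte
    assert(t('b' & 0xFF) == 1)
    assert(t('c' & 0xFF) == 3) // last byte keeps the default shift
    assert(t('x' & 0xFF) == 3) // bytes absent from the needle shift fully
  }
}
```

Note that masking with `& 0xFF` is what makes the signed `Byte` usable as an unsigned table index; the casing change in the patch is purely cosmetic.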
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/HttpHeaderParser.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/HttpHeaderParser.scala
index 439614200..6e0f77e1e 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/HttpHeaderParser.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/HttpHeaderParser.scala
@@ -10,7 +10,10 @@ import java.lang.{ StringBuilder => JStringBuilder }
 
 import akka.annotation.InternalApi
 import akka.event.LoggingAdapter
-import akka.http.scaladsl.settings.ParserSettings.{ IllegalResponseHeaderValueProcessingMode, IllegalResponseHeaderNameProcessingMode }
+import akka.http.scaladsl.settings.ParserSettings.{
+  IllegalResponseHeaderNameProcessingMode,
+  IllegalResponseHeaderValueProcessingMode
+}
 import akka.http.scaladsl.settings.ParserSettings.ErrorLoggingVerbosity
 import akka.http.scaladsl.settings.ParserSettings
 
@@ -68,16 +71,16 @@ import akka.http.impl.model.parser.CharacterClasses._
  */
 @InternalApi
 private[engine] final class HttpHeaderParser private (
-  val settings:                      HttpHeaderParser.Settings,
-  val log:                           LoggingAdapter,
-  onIllegalHeader:                   ErrorInfo => Unit,
-  private[this] var nodes:           Array[Char]               = new Array(512), // initial size, can grow as needed
-  private[this] var nodeCount:       Int                       = 0,
-  private[this] var branchData:      Array[Short]              = new Array(254 * 3),
-  private[this] var branchDataCount: Int                       = 0,
-  private[this] var values:          Array[AnyRef]             = new Array(255), // fixed size of 255
-  private[this] var valueCount:      Int                       = 0,
-  private[this] var trieIsPrivate:   Boolean                   = false) { // signals the trie data can be mutated w/o having to copy first
+    val settings: HttpHeaderParser.Settings,
+    val log: LoggingAdapter,
+    onIllegalHeader: ErrorInfo => Unit,
+    private[this] var nodes: Array[Char] = new Array(512), // initial size, can grow as needed
+    private[this] var nodeCount: Int = 0,
+    private[this] var branchData: Array[Short] = new Array(254 * 3),
+    private[this] var branchDataCount: Int = 0,
+    private[this] var values: Array[AnyRef] = new Array(255), // fixed size of 255
+    private[this] var valueCount: Int = 0,
+    private[this] var trieIsPrivate: Boolean = false) { // signals the trie data can be mutated w/o having to copy first
 
   // TODO: evaluate whether switching to a value-class-based approach allows us to improve code readability without sacrificing performance
 
@@ -95,7 +98,8 @@ private[engine] final class HttpHeaderParser private (
    * Returns a copy of this parser that shares the trie data with this instance.
    */
   def createShallowCopy(): HttpHeaderParser =
-    new HttpHeaderParser(settings, log, onIllegalHeader, nodes, nodeCount, branchData, branchDataCount, values, valueCount)
+    new HttpHeaderParser(settings, log, onIllegalHeader, nodes, nodeCount, branchData, branchDataCount, values,
+      valueCount)
 
   /**
    * Parses a header line and returns the line start index of the subsequent line.
@@ -171,19 +175,22 @@ private[engine] final class HttpHeaderParser private (
   @tailrec private def scanHeaderNameAndReturnIndexOfColon(input: ByteString, start: Int, limit: Int)(ix: Int): Int =
     if (ix < limit)
       (byteChar(input, ix), settings.illegalResponseHeaderNameProcessingMode) match {
-        case (':', _) => ix
+        case (':', _)           => ix
         case (c, _) if tchar(c) => scanHeaderNameAndReturnIndexOfColon(input, start, limit)(ix + 1)
-        case (c, IllegalResponseHeaderNameProcessingMode.Error) => fail(s"Illegal character '${escape(c)}' in header name")
+        case (c, IllegalResponseHeaderNameProcessingMode.Error) =>
+          fail(s"Illegal character '${escape(c)}' in header name")
         case (c, IllegalResponseHeaderNameProcessingMode.Warn) =>
           log.warning(s"Header key contains illegal character '${escape(c)}'")
           scanHeaderNameAndReturnIndexOfColon(input, start, limit)(ix + 1)
         case (c, IllegalResponseHeaderNameProcessingMode.Ignore) =>
           scanHeaderNameAndReturnIndexOfColon(input, start, limit)(ix + 1)
       }
-    else fail(s"HTTP header name exceeds the configured limit of ${limit - start - 1} characters", StatusCodes.RequestHeaderFieldsTooLarge)
+    else fail(s"HTTP header name exceeds the configured limit of ${limit - start - 1} characters",
+      StatusCodes.RequestHeaderFieldsTooLarge)
 
   @tailrec
-  private def parseHeaderValue(input: ByteString, valueStart: Int, branch: ValueBranch)(cursor: Int = valueStart, nodeIx: Int = branch.branchRootNodeIx): Int = {
+  private def parseHeaderValue(input: ByteString, valueStart: Int, branch: ValueBranch)(cursor: Int = valueStart,
+      nodeIx: Int = branch.branchRootNodeIx): Int = {
     def parseAndInsertHeader() = {
       val (header, endIx) = branch.parser(this, input, valueStart, onIllegalHeader)
       if (branch.spaceLeft)
@@ -201,17 +208,17 @@ private[engine] final class HttpHeaderParser private (
     else node >>> 8 match {
       case 0 => parseAndInsertHeader()
       case msb => node & 0xFF match {
-        case 0 => // leaf node
-          resultHeader = values(msb - 1).asInstanceOf[HttpHeader]
-          cursor
-        case nodeChar => // branching node
-          val signum = math.signum(char - nodeChar)
-          branchData(rowIx(msb) + 1 + signum) match {
-            case 0 => parseAndInsertHeader() // header doesn't exist yet
-            case subNodeIx => // descend into branch and advance on char matches (otherwise descend but don't advance)
-              parseHeaderValue(input, valueStart, branch)(cursor + 1 - math.abs(signum), subNodeIx)
-          }
-      }
+          case 0 => // leaf node
+            resultHeader = values(msb - 1).asInstanceOf[HttpHeader]
+            cursor
+          case nodeChar => // branching node
+            val signum = math.signum(char - nodeChar)
+            branchData(rowIx(msb) + 1 + signum) match {
+              case 0 => parseAndInsertHeader() // header doesn't exist yet
+              case subNodeIx => // descend into branch and advance on char matches (otherwise descend but don't advance)
+                parseHeaderValue(input, valueStart, branch)(cursor + 1 - math.abs(signum), subNodeIx)
+            }
+        }
     }
   }
 
@@ -223,10 +230,11 @@ private[engine] final class HttpHeaderParser private (
    *  - the input is not a prefix of an already stored value, i.e. the input must be properly terminated (CRLF or colon)
    */
   @tailrec
-  private def insert(input: ByteString, value: AnyRef)(cursor: Int = 0, endIx: Int = input.length, nodeIx: Int = 0, colonIx: Int = 0): Unit = {
+  private def insert(input: ByteString, value: AnyRef)(cursor: Int = 0, endIx: Int = input.length, nodeIx: Int = 0,
+      colonIx: Int = 0): Unit = {
     val char =
-      if (cursor < colonIx) CharUtils.toLowerCase((input(cursor) & 0xff).toChar)
-      else if (cursor < endIx) (input(cursor) & 0xff).toChar
+      if (cursor < colonIx) CharUtils.toLowerCase((input(cursor) & 0xFF).toChar)
+      else if (cursor < endIx) (input(cursor) & 0xFF).toChar
       else '\u0000'
     val node = nodes(nodeIx)
     if (char == node) insert(input, value)(cursor + 1, endIx, nodeIx + 1, colonIx) // fast match, descend into only subnode
@@ -269,10 +277,11 @@ private[engine] final class HttpHeaderParser private (
    * CAUTION: this method must only be called if the trie data have already been "unshared"!
    */
   @tailrec
-  private def insertRemainingCharsAsNewNodes(input: ByteString, value: AnyRef)(cursor: Int = 0, endIx: Int = input.length, valueIx: Int = newValueIndex, colonIx: Int = 0): Unit = {
+  private def insertRemainingCharsAsNewNodes(input: ByteString, value: AnyRef)(cursor: Int = 0,
+      endIx: Int = input.length, valueIx: Int = newValueIndex, colonIx: Int = 0): Unit = {
     val newNodeIx = newNodeIndex
     if (cursor < endIx) {
-      val c = (input(cursor) & 0xff).toChar
+      val c = (input(cursor) & 0xFF).toChar
       val char = if (cursor < colonIx) CharUtils.toLowerCase(c) else c
       nodes(newNodeIx) = char
       insertRemainingCharsAsNewNodes(input, value)(cursor + 1, endIx, valueIx, colonIx)
@@ -323,7 +332,7 @@ private[engine] final class HttpHeaderParser private (
     def recurse(nodeIx: Int = 0): (Seq[List[String]], Int) = {
       def recurseAndPrefixLines(subNodeIx: Int, p1: String, p2: String, p3: String) = {
         val (lines, mainIx) = recurse(subNodeIx)
-        val prefixedLines = lines.zipWithIndex map {
+        val prefixedLines = lines.zipWithIndex.map {
           case (line, ix) => (if (ix < mainIx) p1 else if (ix > mainIx) p3 else p2) :: line
         }
         prefixedLines -> mainIx
@@ -337,31 +346,31 @@ private[engine] final class HttpHeaderParser private (
       node >>> 8 match {
         case 0 => recurseAndPrefixLines(nodeIx + 1, "  ", char + "-", "  ")
         case msb => node & 0xFF match {
-          case 0 => values(msb - 1) match {
-            case ValueBranch(_, valueParser, branchRootNodeIx, _) =>
-              val pad = " " * (valueParser.headerName.length + 3)
-              recurseAndPrefixLines(branchRootNodeIx, pad, "(" + valueParser.headerName + ")-", pad)
-            case vp: HeaderValueParser => Seq(" (" :: vp.headerName :: ")" :: Nil) -> 0
-            case value: RawHeader      => Seq(" *" :: value.toString :: Nil) -> 0
-            case value                 => Seq(" " :: value.toString :: Nil) -> 0
+            case 0 => values(msb - 1) match {
+                case ValueBranch(_, valueParser, branchRootNodeIx, _) =>
+                  val pad = " " * (valueParser.headerName.length + 3)
+                  recurseAndPrefixLines(branchRootNodeIx, pad, "(" + valueParser.headerName + ")-", pad)
+                case vp: HeaderValueParser => Seq(" (" :: vp.headerName :: ")" :: Nil) -> 0
+                case value: RawHeader      => Seq(" *" :: value.toString :: Nil) -> 0
+                case value                 => Seq(" " :: value.toString :: Nil) -> 0
+              }
+            case nodeChar =>
+              val rix = rowIx(msb)
+              val preLines = branchLines(rix, "  ", "┌─", "| ")
+              val postLines = branchLines(rix + 2, "| ", "└─", "  ")
+              val p1 = if (preLines.nonEmpty) "| " else "  "
+              val p3 = if (postLines.nonEmpty) "| " else "  "
+              val (matchLines, mainLineIx) = recurseAndPrefixLines(branchData(rix + 1), p1, char + '-', p3)
+              (preLines ++ matchLines ++ postLines, mainLineIx + preLines.size)
           }
-          case nodeChar =>
-            val rix = rowIx(msb)
-            val preLines = branchLines(rix, "  ", "┌─", "| ")
-            val postLines = branchLines(rix + 2, "| ", "└─", "  ")
-            val p1 = if (preLines.nonEmpty) "| " else "  "
-            val p3 = if (postLines.nonEmpty) "| " else "  "
-            val (matchLines, mainLineIx) = recurseAndPrefixLines(branchData(rix + 1), p1, char + '-', p3)
-            (preLines ++ matchLines ++ postLines, mainLineIx + preLines.size)
-        }
       }
     }
     val sb = new JStringBuilder()
     val (lines, mainLineIx) = recurse()
-    lines.zipWithIndex foreach {
+    lines.zipWithIndex.foreach {
       case (line, ix) =>
         sb.append(if (ix == mainLineIx) '-' else ' ')
-        line foreach (s => sb.append(s))
+        line.foreach(s => sb.append(s))
         sb.append('\n')
     }
     sb.toString
@@ -375,10 +384,11 @@ private[engine] final class HttpHeaderParser private (
       val node = nodes(nodeIx)
       node >>> 8 match {
         case 0 => build(nodeIx + 1)
-        case msb if (node & 0xFF) == 0 => values(msb - 1) match {
-          case ValueBranch(_, parser, _, count) => Map(parser.headerName -> count)
-          case _                                => Map.empty
-        }
+        case msb if (node & 0xFF) == 0 =>
+          values(msb - 1) match {
+            case ValueBranch(_, parser, _, count) => Map(parser.headerName -> count)
+            case _                                => Map.empty
+          }
         case msb =>
           def branch(ix: Int): Map[String, Int] = if (ix > 0) build(ix) else Map.empty
           val rix = rowIx(msb)
@@ -393,9 +403,9 @@ private[engine] final class HttpHeaderParser private (
    */
   def formatRawTrie: String = {
     def char(c: Char) = (c >> 8).toString + (if ((c & 0xFF) > 0) "/" + (c & 0xFF).toChar else "/Ω")
-    s"nodes: ${nodes take nodeCount map char mkString ", "}\n" +
-      s"branchData: ${branchData take branchDataCount grouped 3 map { case Array(a, b, c) => s"$a/$b/$c" } mkString ", "}\n" +
-      s"values: ${values take valueCount mkString ", "}"
+    s"nodes: ${nodes.take(nodeCount).map(char).mkString(", ")}\n" +
+    s"branchData: ${branchData.take(branchDataCount).grouped(3).map { case Array(a, b, c) => s"$a/$b/$c" }.mkString(", ")}\n" +
+    s"values: ${values.take(valueCount).mkString(", ")}"
   }
 
   /**
@@ -470,15 +480,16 @@ private[http] object HttpHeaderParser {
     "sec-websocket-protocol",
     "sec-websocket-version",
     "transfer-encoding",
-    "upgrade"
-  )
+    "upgrade")
 
   def apply(settings: HttpHeaderParser.Settings, log: LoggingAdapter) =
     prime(unprimed(settings, log, defaultIllegalHeaderHandler(settings, log)))
 
   def defaultIllegalHeaderHandler(settings: HttpHeaderParser.Settings, log: LoggingAdapter): ErrorInfo => Unit =
     if (settings.illegalHeaderWarnings)
-      info => logParsingError(info withSummaryPrepended "Illegal header", log, settings.errorLoggingVerbosity, settings.ignoreIllegalHeaderFor)
+      info =>
+        logParsingError(info.withSummaryPrepended("Illegal header"), log, settings.errorLoggingVerbosity,
+          settings.ignoreIllegalHeaderFor)
     else
       (_: ErrorInfo) => () // Does exactly what the label says - nothing
 
@@ -494,7 +505,8 @@ private[http] object HttpHeaderParser {
       HeaderParser.ruleNames
         .filter(headerParserFilter).iterator
         .map { name =>
-          new ModeledHeaderValueParser(name, parser.settings.maxHeaderValueLength, parser.settings.headerValueCacheLimit(name), parser.log, parser.settings)
+          new ModeledHeaderValueParser(name, parser.settings.maxHeaderValueLength,
+            parser.settings.headerValueCacheLimit(name), parser.log, parser.settings)
         }.to(scala.collection.immutable.IndexedSeq)
 
     def insertInGoodOrder(items: Seq[Any])(startIx: Int = 0, endIx: Int = items.size): Unit =
@@ -528,20 +540,23 @@ private[http] object HttpHeaderParser {
     parser.insertRemainingCharsAsNewNodes(input, value)()
 
   private[parsing] abstract class HeaderValueParser(val headerName: String, val maxValueCount: Int) {
-    def apply(hhp: HttpHeaderParser, input: ByteString, valueStart: Int, onIllegalHeader: ErrorInfo => Unit): (HttpHeader, Int)
+    def apply(hhp: HttpHeaderParser, input: ByteString, valueStart: Int, onIllegalHeader: ErrorInfo => Unit)
+        : (HttpHeader, Int)
     override def toString: String = s"HeaderValueParser[$headerName]"
     def cachingEnabled = maxValueCount > 0
   }
 
-  private[parsing] class ModeledHeaderValueParser(headerName: String, maxHeaderValueLength: Int, maxValueCount: Int, log: LoggingAdapter, settings: HeaderParser.Settings)
-    extends HeaderValueParser(headerName, maxValueCount) {
+  private[parsing] class ModeledHeaderValueParser(headerName: String, maxHeaderValueLength: Int, maxValueCount: Int,
+      log: LoggingAdapter, settings: HeaderParser.Settings)
+      extends HeaderValueParser(headerName, maxValueCount) {
     val parser = HeaderParser.lookupParser(headerName, settings).getOrElse(
-      throw new IllegalStateException(s"Missing parser for modeled [$headerName].")
-    )
+      throw new IllegalStateException(s"Missing parser for modeled [$headerName]."))
 
-    def apply(hhp: HttpHeaderParser, input: ByteString, valueStart: Int, onIllegalHeader: ErrorInfo => Unit): (HttpHeader, Int) = {
+    def apply(hhp: HttpHeaderParser, input: ByteString, valueStart: Int, onIllegalHeader: ErrorInfo => Unit)
+        : (HttpHeader, Int) = {
       // TODO: optimize by running the header value parser directly on the input ByteString (rather than an extracted String); seems done?
-      val (headerValue, endIx) = scanHeaderValue(hhp, input, valueStart, valueStart + maxHeaderValueLength + 2, log, settings.illegalResponseHeaderValueProcessingMode)()
+      val (headerValue, endIx) = scanHeaderValue(hhp, input, valueStart, valueStart + maxHeaderValueLength + 2, log,
+        settings.illegalResponseHeaderValueProcessingMode)()
       val trimmedHeaderValue = headerValue.trim
       val header = parser(trimmedHeaderValue) match {
         case HeaderParser.Success(h) => h
@@ -556,15 +571,19 @@ private[http] object HttpHeaderParser {
   }
 
   private[parsing] class RawHeaderValueParser(headerName: String, maxHeaderValueLength: Int, maxValueCount: Int,
-                                              log: LoggingAdapter, mode: IllegalResponseHeaderValueProcessingMode) extends HeaderValueParser(headerName, maxValueCount) {
-    def apply(hhp: HttpHeaderParser, input: ByteString, valueStart: Int, onIllegalHeader: ErrorInfo => Unit): (HttpHeader, Int) = {
-      val (headerValue, endIx) = scanHeaderValue(hhp, input, valueStart, valueStart + maxHeaderValueLength + 2, log, mode)()
+      log: LoggingAdapter, mode: IllegalResponseHeaderValueProcessingMode)
+      extends HeaderValueParser(headerName, maxValueCount) {
+    def apply(hhp: HttpHeaderParser, input: ByteString, valueStart: Int, onIllegalHeader: ErrorInfo => Unit)
+        : (HttpHeader, Int) = {
+      val (headerValue, endIx) =
+        scanHeaderValue(hhp, input, valueStart, valueStart + maxHeaderValueLength + 2, log, mode)()
       RawHeader(headerName, headerValue.trim) -> endIx
     }
   }
 
-  @tailrec private def scanHeaderValue(hhp: HttpHeaderParser, input: ByteString, start: Int, limit: Int, log: LoggingAdapter,
-                                       mode: IllegalResponseHeaderValueProcessingMode)(sb: JStringBuilder = null, ix: Int = start): (String, Int) = {
+  @tailrec private def scanHeaderValue(hhp: HttpHeaderParser, input: ByteString, start: Int, limit: Int,
+      log: LoggingAdapter,
+      mode: IllegalResponseHeaderValueProcessingMode)(sb: JStringBuilder = null, ix: Int = start): (String, Int) = {
     hhp.byteBuffer.clear()
 
     def appended(c: Char) = (if (sb != null) sb else new JStringBuilder(asciiString(input, start, ix))).append(c)
@@ -596,7 +615,8 @@ private[http] object HttpHeaderParser {
               hhp.byteBuffer.put(byteAt(input, ix + 2))
               nix = ix + 3
               hhp.decodeByteBuffer() match { // if we cannot decode as UTF8 we don't decode but simply copy
-                case -1 => if (sb != null) sb.append(c).append(byteChar(input, ix + 1)).append(byteChar(input, ix + 2)) else null
+                case -1 =>
+                  if (sb != null) sb.append(c).append(byteChar(input, ix + 1)).append(byteChar(input, ix + 2)) else null
                 case cc => appended2(cc)
               }
             } else if ((c & 0xF8) == 0xF0) { // 4-byte UTF-8 sequence?
@@ -606,7 +626,10 @@ private[http] object HttpHeaderParser {
               hhp.byteBuffer.put(byteAt(input, ix + 3))
               nix = ix + 4
               hhp.decodeByteBuffer() match { // if we cannot decode as UTF8 we don't decode but simply copy
-                case -1 => if (sb != null) sb.append(c).append(byteChar(input, ix + 1)).append(byteChar(input, ix + 2)).append(byteChar(input, ix + 3)) else null
+                case -1 =>
+                  if (sb != null) sb.append(c).append(byteChar(input, ix + 1)).append(byteChar(input, ix + 2)).append(
+                    byteChar(input, ix + 3))
+                  else null
                 case cc => appended2(cc)
               }
             } else {
@@ -625,10 +648,12 @@ private[http] object HttpHeaderParser {
             }
           scanHeaderValue(hhp, input, start, limit, log, mode)(nsb, nix)
       }
-    else fail(s"HTTP header value exceeds the configured limit of ${limit - start - 2} characters", StatusCodes.RequestHeaderFieldsTooLarge)
+    else fail(s"HTTP header value exceeds the configured limit of ${limit - start - 2} characters",
+      StatusCodes.RequestHeaderFieldsTooLarge)
   }
 
-  def fail(summary: String, status: StatusCode = StatusCodes.BadRequest) = throw new ParsingException(status, ErrorInfo(summary))
+  def fail(summary: String, status: StatusCode = StatusCodes.BadRequest) =
+    throw new ParsingException(status, ErrorInfo(summary))
 
   private object OutOfTrieSpaceException extends SingletonException
 
@@ -643,7 +668,8 @@ private[http] object HttpHeaderParser {
    * @param branchRootNodeIx the nodeIx for the root node of the trie branch holding all cached header values of this type
    * @param valueCount the number of values already stored in this header-type-specific branch
    */
-  private final case class ValueBranch(valueIx: Int, parser: HeaderValueParser, branchRootNodeIx: Int, valueCount: Int) {
+  private final case class ValueBranch(valueIx: Int, parser: HeaderValueParser, branchRootNodeIx: Int,
+      valueCount: Int) {
     def withValueCountIncreased = copy(valueCount = valueCount + 1)
     def spaceLeft = valueCount < parser.maxValueCount
   }
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/HttpMessageParser.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/HttpMessageParser.scala
index 215239e1d..9858ba43a 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/HttpMessageParser.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/HttpMessageParser.scala
@@ -47,13 +47,14 @@ private[http] trait HttpMessageParser[Output >: MessageOutput <: ParserOutput] {
   protected def settings: ParserSettings
   protected def headerParser: HttpHeaderParser
   protected def isResponseParser: Boolean
+
   /** invoked if the specified protocol is unknown */
   protected def onBadProtocol(input: ByteString): Nothing
   protected def parseMessage(input: ByteString, offset: Int): HttpMessageParser.StateResult
   protected def parseEntity(headers: List[HttpHeader], protocol: HttpProtocol, input: ByteString, bodyStart: Int,
-                            clh: Option[`Content-Length`], cth: Option[`Content-Type`], isChunked: Boolean,
-                            expect100continue: Boolean, hostHeaderPresent: Boolean, closeAfterResponseCompletion: Boolean,
-                            sslSession: SSLSession): HttpMessageParser.StateResult
+      clh: Option[`Content-Length`], cth: Option[`Content-Type`], isChunked: Boolean,
+      expect100continue: Boolean, hostHeaderPresent: Boolean, closeAfterResponseCompletion: Boolean,
+      sslSession: SSLSession): HttpMessageParser.StateResult
 
   protected final def initialHeaderBuffer: ListBuffer[HttpHeader] =
     if (settings.includeTlsSessionInfoHeader && tlsSessionInfoHeader != null) new ListBuffer() += tlsSessionInfoHeader
@@ -70,7 +71,7 @@ private[http] trait HttpMessageParser[Output >: MessageOutput <: ParserOutput] {
     @tailrec def run(next: ByteString => StateResult): StateResult =
       (try next(input)
       catch {
-        case e: ParsingException => failMessageStart(e.status, e.info)
+        case e: ParsingException    => failMessageStart(e.status, e.info)
         case NotEnoughDataException =>
           // we are missing a try/catch{continue} wrapper somewhere
           throw new IllegalStateException("unexpected NotEnoughDataException", NotEnoughDataException)
@@ -134,11 +135,12 @@ private[http] trait HttpMessageParser[Output >: MessageOutput <: ParserOutput] {
    * @param e100c expect 100 continue
    * @param hh host header seen
    */
-  @tailrec protected final def parseHeaderLines(input: ByteString, lineStart: Int, headers: ListBuffer[HttpHeader] = initialHeaderBuffer,
-                                                headerCount: Int = 0, ch: Option[Connection] = None,
-                                                clh: Option[`Content-Length`] = None, cth: Option[`Content-Type`] = None,
-                                                isChunked: Boolean = false, e100c: Boolean = false,
-                                                hh: Boolean = false): StateResult =
+  @tailrec protected final def parseHeaderLines(input: ByteString, lineStart: Int,
+      headers: ListBuffer[HttpHeader] = initialHeaderBuffer,
+      headerCount: Int = 0, ch: Option[Connection] = None,
+      clh: Option[`Content-Length`] = None, cth: Option[`Content-Type`] = None,
+      isChunked: Boolean = false, e100c: Boolean = false,
+      hh: Boolean = false): StateResult =
     if (headerCount < settings.maxHeaderCount) {
       var lineEnd = 0
       val resultHeader =
@@ -149,7 +151,8 @@ private[http] trait HttpMessageParser[Output >: MessageOutput <: ParserOutput] {
           case NotEnoughDataException => null
         }
       resultHeader match {
-        case null => continue(input, lineStart)(parseHeaderLinesAux(headers, headerCount, ch, clh, cth, isChunked, e100c, hh))
+        case null =>
+          continue(input, lineStart)(parseHeaderLinesAux(headers, headerCount, ch, clh, cth, isChunked, e100c, hh))
 
         case EmptyHeader =>
           val close = HttpMessage.connectionCloseExpected(protocol, ch)
@@ -157,26 +160,30 @@ private[http] trait HttpMessageParser[Output >: MessageOutput <: ParserOutput] {
           parseEntity(headers.toList, protocol, input, lineEnd, clh, cth, isChunked, e100c, hh, close, lastSession)
 
         case h: `Content-Length` => clh match {
-          case None      => parseHeaderLines(input, lineEnd, headers, headerCount + 1, ch, Some(h), cth, isChunked, e100c, hh)
-          case Some(`h`) => parseHeaderLines(input, lineEnd, headers, headerCount, ch, clh, cth, isChunked, e100c, hh)
-          case _         => failMessageStart("HTTP message must not contain more than one Content-Length header")
-        }
+            case None =>
+              parseHeaderLines(input, lineEnd, headers, headerCount + 1, ch, Some(h), cth, isChunked, e100c, hh)
+            case Some(`h`) => parseHeaderLines(input, lineEnd, headers, headerCount, ch, clh, cth, isChunked, e100c, hh)
+            case _         => failMessageStart("HTTP message must not contain more than one Content-Length header")
+          }
         case h: `Content-Type` => cth match {
-          case None =>
-            parseHeaderLines(input, lineEnd, headers, headerCount + 1, ch, clh, Some(h), isChunked, e100c, hh)
-          case Some(`h`) =>
-            parseHeaderLines(input, lineEnd, headers, headerCount, ch, clh, cth, isChunked, e100c, hh)
-          case Some(`Content-Type`(ContentTypes.`NoContentType`)) => // never encountered except when parsing conflicting headers (see below)
-            parseHeaderLines(input, lineEnd, headers += h, headerCount + 1, ch, clh, cth, isChunked, e100c, hh)
-          case Some(x) =>
-            import ConflictingContentTypeHeaderProcessingMode._
-            settings.conflictingContentTypeHeaderProcessingMode match {
-              case Error         => failMessageStart("HTTP message must not contain more than one Content-Type header")
-              case First         => parseHeaderLines(input, lineEnd, headers += h, headerCount + 1, ch, clh, cth, isChunked, e100c, hh)
-              case Last          => parseHeaderLines(input, lineEnd, headers += x, headerCount + 1, ch, clh, Some(h), isChunked, e100c, hh)
-              case NoContentType => parseHeaderLines(input, lineEnd, headers += x += h, headerCount + 1, ch, clh, Some(`Content-Type`(ContentTypes.`NoContentType`)), isChunked, e100c, hh)
-            }
-        }
+            case None =>
+              parseHeaderLines(input, lineEnd, headers, headerCount + 1, ch, clh, Some(h), isChunked, e100c, hh)
+            case Some(`h`) =>
+              parseHeaderLines(input, lineEnd, headers, headerCount, ch, clh, cth, isChunked, e100c, hh)
+            case Some(`Content-Type`(ContentTypes.`NoContentType`)) => // never encountered except when parsing conflicting headers (see below)
+              parseHeaderLines(input, lineEnd, headers += h, headerCount + 1, ch, clh, cth, isChunked, e100c, hh)
+            case Some(x) =>
+              import ConflictingContentTypeHeaderProcessingMode._
+              settings.conflictingContentTypeHeaderProcessingMode match {
+                case Error => failMessageStart("HTTP message must not contain more than one Content-Type header")
+                case First =>
+                  parseHeaderLines(input, lineEnd, headers += h, headerCount + 1, ch, clh, cth, isChunked, e100c, hh)
+                case Last => parseHeaderLines(input, lineEnd, headers += x, headerCount + 1, ch, clh, Some(h),
+                    isChunked, e100c, hh)
+                case NoContentType => parseHeaderLines(input, lineEnd, headers += x += h, headerCount + 1, ch, clh,
+                    Some(`Content-Type`(ContentTypes.`NoContentType`)), isChunked, e100c, hh)
+              }
+          }
         case h: `Transfer-Encoding` =>
           if (!isChunked) {
             h.encodings match {
@@ -194,28 +201,33 @@ private[http] trait HttpMessageParser[Output >: MessageOutput <: ParserOutput] {
             failMessageStart("Multiple Transfer-Encoding entries not supported")
           }
         case h: Connection => ch match {
-          case None    => parseHeaderLines(input, lineEnd, headers += h, headerCount + 1, Some(h), clh, cth, isChunked, e100c, hh)
-          case Some(x) => parseHeaderLines(input, lineEnd, headers, headerCount, Some(x append h.tokens), clh, cth, isChunked, e100c, hh)
-        }
+            case None =>
+              parseHeaderLines(input, lineEnd, headers += h, headerCount + 1, Some(h), clh, cth, isChunked, e100c, hh)
+            case Some(x) => parseHeaderLines(input, lineEnd, headers, headerCount, Some(x.append(h.tokens)), clh, cth,
+                isChunked, e100c, hh)
+          }
         case h: Host =>
-          if (!hh || isResponseParser) parseHeaderLines(input, lineEnd, headers += h, headerCount + 1, ch, clh, cth, isChunked, e100c, hh = true)
+          if (!hh || isResponseParser)
+            parseHeaderLines(input, lineEnd, headers += h, headerCount + 1, ch, clh, cth, isChunked, e100c, hh = true)
           else failMessageStart("HTTP message must not contain more than one Host header")
 
-        case h: Expect => parseHeaderLines(input, lineEnd, headers += h, headerCount + 1, ch, clh, cth, isChunked, e100c = true, hh)
+        case h: Expect =>
+          parseHeaderLines(input, lineEnd, headers += h, headerCount + 1, ch, clh, cth, isChunked, e100c = true, hh)
 
-        case h         => parseHeaderLines(input, lineEnd, headers += h, headerCount + 1, ch, clh, cth, isChunked, e100c, hh)
+        case h => parseHeaderLines(input, lineEnd, headers += h, headerCount + 1, ch, clh, cth, isChunked, e100c, hh)
       }
-    } else failMessageStart(s"HTTP message contains more than the configured limit of ${settings.maxHeaderCount} headers")
+    } else
+      failMessageStart(s"HTTP message contains more than the configured limit of ${settings.maxHeaderCount} headers")
 
   // work-around for compiler complaining about non-tail-recursion if we inline this method
   private def parseHeaderLinesAux(headers: ListBuffer[HttpHeader], headerCount: Int, ch: Option[Connection],
-                                  clh: Option[`Content-Length`], cth: Option[`Content-Type`], isChunked: Boolean,
-                                  e100c: Boolean, hh: Boolean)(input: ByteString, lineStart: Int): StateResult =
+      clh: Option[`Content-Length`], cth: Option[`Content-Type`], isChunked: Boolean,
+      e100c: Boolean, hh: Boolean)(input: ByteString, lineStart: Int): StateResult =
     parseHeaderLines(input, lineStart, headers, headerCount, ch, clh, cth, isChunked, e100c, hh)
 
   protected final def parseFixedLengthBody(
-    remainingBodyBytes: Long,
-    isLastMessage:      Boolean)(input: ByteString, bodyStart: Int): StateResult = {
+      remainingBodyBytes: Long,
+      isLastMessage: Boolean)(input: ByteString, bodyStart: Int): StateResult = {
     val remainingInputBytes = input.length - bodyStart
     if (remainingInputBytes > 0) {
       if (remainingInputBytes < remainingBodyBytes) {
@@ -232,9 +244,10 @@ private[http] trait HttpMessageParser[Output >: MessageOutput <: ParserOutput] {
     } else continue(input, bodyStart)(parseFixedLengthBody(remainingBodyBytes, isLastMessage))
   }
 
-  protected final def parseChunk(input: ByteString, offset: Int, isLastMessage: Boolean, totalBytesRead: Long): StateResult = {
+  protected final def parseChunk(
+      input: ByteString, offset: Int, isLastMessage: Boolean, totalBytesRead: Long): StateResult = {
     @tailrec def parseTrailer(extension: String, lineStart: Int, headers: List[HttpHeader] = Nil,
-                              headerCount: Int = 0): StateResult = {
+        headerCount: Int = 0): StateResult = {
       var errorInfo: ErrorInfo = null
       val lineEnd =
         try headerParser.parseHeaderLine(input, lineStart)()
@@ -243,7 +256,8 @@ private[http] trait HttpMessageParser[Output >: MessageOutput <: ParserOutput] {
         headerParser.resultHeader match {
           case EmptyHeader =>
             val lastChunk =
-              if (extension.isEmpty && headers.isEmpty) HttpEntity.LastChunk else HttpEntity.LastChunk(extension, headers)
+              if (extension.isEmpty && headers.isEmpty) HttpEntity.LastChunk
+              else HttpEntity.LastChunk(extension, headers)
             emit(EntityChunk(lastChunk))
             emit(MessageEnd)
             setCompletionHandling(CompletionOk)
@@ -251,7 +265,8 @@ private[http] trait HttpMessageParser[Output >: MessageOutput <: ParserOutput] {
             else startNewMessage(input, lineEnd)
           case header if headerCount < settings.maxHeaderCount =>
             parseTrailer(extension, lineEnd, header :: headers, headerCount + 1)
-          case _ => failEntityStream(s"Chunk trailer contains more than the configured limit of ${settings.maxHeaderCount} headers")
+          case _ => failEntityStream(
+              s"Chunk trailer contains more than the configured limit of ${settings.maxHeaderCount} headers")
         }
       } else failEntityStream(errorInfo)
     }
@@ -265,8 +280,8 @@ private[http] trait HttpMessageParser[Output >: MessageOutput <: ParserOutput] {
         }
         byteChar(input, chunkBodyEnd) match {
           case '\r' if byteChar(input, chunkBodyEnd + 1) == '\n' => result(2)
-          case '\n' => result(1)
-          case x => failEntityStream("Illegal chunk termination")
+          case '\n'                                              => result(1)
+          case x                                                 => failEntityStream("Illegal chunk termination")
         }
       } else parseTrailer(extension, cursor)
 
@@ -275,20 +290,22 @@ private[http] trait HttpMessageParser[Output >: MessageOutput <: ParserOutput] {
         def extension = asciiString(input, startIx, cursor)
         byteChar(input, cursor) match {
           case '\r' if byteChar(input, cursor + 1) == '\n' => parseChunkBody(chunkSize, extension, cursor + 2)
-          case '\n' => parseChunkBody(chunkSize, extension, cursor + 1)
-          case _ => parseChunkExtensions(chunkSize, cursor + 1)(startIx)
+          case '\n'                                        => parseChunkBody(chunkSize, extension, cursor + 1)
+          case _                                           => parseChunkExtensions(chunkSize, cursor + 1)(startIx)
         }
-      } else failEntityStream(s"HTTP chunk extension length exceeds configured limit of ${settings.maxChunkExtLength} characters")
+      } else failEntityStream(
+        s"HTTP chunk extension length exceeds configured limit of ${settings.maxChunkExtLength} characters")
 
     @tailrec def parseSize(cursor: Int, size: Long): StateResult =
       if (size <= settings.maxChunkSize) {
         byteChar(input, cursor) match {
           case c if CharacterClasses.HEXDIG(c) => parseSize(cursor + 1, size * 16 + CharUtils.hexValue(c))
-          case ';' if cursor > offset => parseChunkExtensions(size.toInt, cursor + 1)()
-          case '\r' if cursor > offset && byteChar(input, cursor + 1) == '\n' => parseChunkBody(size.toInt, "", cursor + 2)
-          case '\n' if cursor > offset => parseChunkBody(size.toInt, "", cursor + 1)
+          case ';' if cursor > offset          => parseChunkExtensions(size.toInt, cursor + 1)()
+          case '\r' if cursor > offset && byteChar(input, cursor + 1) == '\n' =>
+            parseChunkBody(size.toInt, "", cursor + 2)
+          case '\n' if cursor > offset      => parseChunkBody(size.toInt, "", cursor + 1)
           case c if CharacterClasses.WSP(c) => parseSize(cursor + 1, size) // illegal according to the spec but can happen, see issue #1812
-          case c => failEntityStream(s"Illegal character '${escape(c)}' in chunk start")
+          case c                            => failEntityStream(s"Illegal character '${escape(c)}' in chunk start")
         }
       } else failEntityStream(s"HTTP chunk size exceeds the configured limit of ${settings.maxChunkSize} bytes")
 
@@ -326,9 +343,12 @@ private[http] trait HttpMessageParser[Output >: MessageOutput <: ParserOutput] {
   }
 
   protected final def failMessageStart(summary: String): StateResult = failMessageStart(summary, "")
-  protected final def failMessageStart(summary: String, detail: String): StateResult = failMessageStart(StatusCodes.BadRequest, summary, detail)
-  protected final def failMessageStart(status: StatusCode): StateResult = failMessageStart(status, status.defaultMessage)
-  protected final def failMessageStart(status: StatusCode, summary: String, detail: String = ""): StateResult = failMessageStart(status, ErrorInfo(summary, detail))
+  protected final def failMessageStart(summary: String, detail: String): StateResult =
+    failMessageStart(StatusCodes.BadRequest, summary, detail)
+  protected final def failMessageStart(status: StatusCode): StateResult =
+    failMessageStart(status, status.defaultMessage)
+  protected final def failMessageStart(status: StatusCode, summary: String, detail: String = ""): StateResult =
+    failMessageStart(status, ErrorInfo(summary, detail))
   protected final def failMessageStart(status: StatusCode, info: ErrorInfo): StateResult = {
     emit(MessageStartError(status, info))
     setCompletionHandling(CompletionOk)
@@ -336,7 +356,8 @@ private[http] trait HttpMessageParser[Output >: MessageOutput <: ParserOutput] {
   }
 
   protected final def failEntityStream(summary: String): StateResult = failEntityStream(summary, "")
-  protected final def failEntityStream(summary: String, detail: String): StateResult = failEntityStream(ErrorInfo(summary, detail))
+  protected final def failEntityStream(summary: String, detail: String): StateResult =
+    failEntityStream(ErrorInfo(summary, detail))
   protected final def failEntityStream(info: ErrorInfo): StateResult = {
     emit(EntityStreamError(info))
     setCompletionHandling(CompletionOk)
@@ -363,7 +384,7 @@ private[http] trait HttpMessageParser[Output >: MessageOutput <: ParserOutput] {
     StrictEntityCreator(if (cth.isDefined) HttpEntity.empty(cth.get.contentType) else HttpEntity.Empty)
 
   protected final def strictEntity(cth: Option[`Content-Type`], input: ByteString, bodyStart: Int,
-                                   contentLength: Int): StrictEntityCreator[Output, UniversalEntity] =
+      contentLength: Int): StrictEntityCreator[Output, UniversalEntity] =
     StrictEntityCreator(HttpEntity.Strict(contentType(cth), input.slice(bodyStart, bodyStart + contentLength)))
 
   protected final def defaultEntity[A <: ParserOutput](cth: Option[`Content-Type`], contentLength: Long) =
@@ -402,7 +423,8 @@ private[http] object HttpMessageParser {
   val CompletionIsMessageStartError: CompletionHandling =
     () => Some(ParserOutput.MessageStartError(StatusCodes.BadRequest, ErrorInfo("Illegal HTTP message start")))
   val CompletionIsEntityStreamError: CompletionHandling =
-    () => Some(ParserOutput.EntityStreamError(ErrorInfo(
-      "Entity stream truncation. The HTTP parser was receiving an entity when the underlying connection was " +
+    () =>
+      Some(ParserOutput.EntityStreamError(ErrorInfo(
+        "Entity stream truncation. The HTTP parser was receiving an entity when the underlying connection was " +
         "closed unexpectedly.")))
 }
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/HttpRequestParser.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/HttpRequestParser.scala
index 56fcce514..3bb8ab1cc 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/HttpRequestParser.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/HttpRequestParser.scala
@@ -29,11 +29,11 @@ import akka.stream.stage.{ GraphStage, GraphStageLogic, InHandler, OutHandler }
  */
 @InternalApi
 private[http] final class HttpRequestParser(
-  settings:            ParserSettings,
-  websocketSettings:   WebSocketSettings,
-  rawRequestUriHeader: Boolean,
-  headerParser:        HttpHeaderParser)
-  extends GraphStage[FlowShape[SessionBytes, RequestOutput]] { self =>
+    settings: ParserSettings,
+    websocketSettings: WebSocketSettings,
+    rawRequestUriHeader: Boolean,
+    headerParser: HttpHeaderParser)
+    extends GraphStage[FlowShape[SessionBytes, RequestOutput]] { self =>
 
   import settings._
 
@@ -44,199 +44,207 @@ private[http] final class HttpRequestParser(
 
   override protected def initialAttributes: Attributes = Attributes.name("HttpRequestParser")
 
-  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic = new GraphStageLogic(shape) with HttpMessageParser[RequestOutput] with InHandler with OutHandler {
+  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
+    new GraphStageLogic(shape) with HttpMessageParser[RequestOutput] with InHandler with OutHandler {
 
-    import HttpMessageParser._
+      import HttpMessageParser._
 
-    override val settings = self.settings
-    override val headerParser = self.headerParser.createShallowCopy()
-    override val isResponseParser = false
+      override val settings = self.settings
+      override val headerParser = self.headerParser.createShallowCopy()
+      override val isResponseParser = false
 
-    private[this] var method: HttpMethod = _
-    private[this] var uri: Uri = _
-    private[this] var uriBytes: ByteString = _
+      private[this] var method: HttpMethod = _
+      private[this] var uri: Uri = _
+      private[this] var uriBytes: ByteString = _
 
-    override def onPush(): Unit = handleParserOutput(parseSessionBytes(grab(in)))
-    override def onPull(): Unit = handleParserOutput(doPull())
+      override def onPush(): Unit = handleParserOutput(parseSessionBytes(grab(in)))
+      override def onPull(): Unit = handleParserOutput(doPull())
 
-    override def onUpstreamFinish(): Unit =
-      if (super.shouldComplete()) completeStage()
-      else if (isAvailable(out)) handleParserOutput(doPull())
+      override def onUpstreamFinish(): Unit =
+        if (super.shouldComplete()) completeStage()
+        else if (isAvailable(out)) handleParserOutput(doPull())
 
-    setHandlers(in, out, this)
+      setHandlers(in, out, this)
 
-    private def handleParserOutput(output: RequestOutput): Unit = {
-      output match {
-        case StreamEnd    => completeStage()
-        case NeedMoreData => pull(in)
-        case x            => push(out, x)
+      private def handleParserOutput(output: RequestOutput): Unit = {
+        output match {
+          case StreamEnd    => completeStage()
+          case NeedMoreData => pull(in)
+          case x            => push(out, x)
+        }
       }
-    }
 
-    override def parseMessage(input: ByteString, offset: Int): StateResult =
-      if (offset < input.length) {
-        var cursor = parseMethod(input, offset)
-        cursor = parseRequestTarget(input, cursor)
-        cursor = parseProtocol(input, cursor)
-        if (byteChar(input, cursor) == '\r' && byteChar(input, cursor + 1) == '\n')
-          parseHeaderLines(input, cursor + 2)
-        else if (byteChar(input, cursor) == '\n')
-          parseHeaderLines(input, cursor + 1)
-        else onBadProtocol(input.drop(cursor))
-      } else
-        // Without HTTP pipelining it's likely that buffer is exhausted after reading one message,
-        // so we check above explicitly if we are done and stop work here without running into NotEnoughDataException
-        // when continuing to parse.
-        continue(startNewMessage)
-
-    def parseMethod(input: ByteString, cursor: Int): Int = {
-      @tailrec def parseCustomMethod(ix: Int = 0, sb: JStringBuilder = new JStringBuilder(16)): Int =
-        if (ix < maxMethodLength) {
-          byteChar(input, cursor + ix) match {
-            case ' ' =>
-              customMethods(sb.toString) match {
-                case Some(m) =>
-                  method = m
-                  cursor + ix + 1
-                case None => throw new ParsingException(NotImplemented, ErrorInfo("Unsupported HTTP method", sb.toString))
-              }
-            case c => parseCustomMethod(ix + 1, sb.append(c))
-          }
+      override def parseMessage(input: ByteString, offset: Int): StateResult =
+        if (offset < input.length) {
+          var cursor = parseMethod(input, offset)
+          cursor = parseRequestTarget(input, cursor)
+          cursor = parseProtocol(input, cursor)
+          if (byteChar(input, cursor) == '\r' && byteChar(input, cursor + 1) == '\n')
+            parseHeaderLines(input, cursor + 2)
+          else if (byteChar(input, cursor) == '\n')
+            parseHeaderLines(input, cursor + 1)
+          else onBadProtocol(input.drop(cursor))
         } else
-          throw new ParsingException(
-            BadRequest,
-            ErrorInfo("Unsupported HTTP method", s"HTTP method too long (started with '${sb.toString}')$remoteAddressStr. " +
-              "Increase `akka.http.server.parsing.max-method-length` to support HTTP methods with more characters."))
-
-      @tailrec def parseMethod(meth: HttpMethod, ix: Int = 1): Int =
-        if (ix == meth.value.length)
-          if (byteChar(input, cursor + ix) == ' ') {
-            method = meth
-            cursor + ix + 1
-          } else parseCustomMethod()
-        else if (byteChar(input, cursor + ix) == meth.value.charAt(ix)) parseMethod(meth, ix + 1)
-        else parseCustomMethod()
-
-      import HttpMethods._
-      (byteChar(input, cursor): @switch) match {
-        case 'G' => parseMethod(GET)
-        case 'P' => byteChar(input, cursor + 1) match {
-          case 'O' => parseMethod(POST, 2)
-          case 'U' => parseMethod(PUT, 2)
-          case 'A' => parseMethod(PATCH, 2)
-          case _   => parseCustomMethod()
-        }
-        case 'D' => parseMethod(DELETE)
-        case 'H' => parseMethod(HEAD)
-        case 'O' => parseMethod(OPTIONS)
-        case 'T' => parseMethod(TRACE)
-        case 'C' => parseMethod(CONNECT)
-        case 0x16 =>
-          throw new ParsingException(
-            BadRequest,
-            ErrorInfo(
-              "Unsupported HTTP method",
-              s"The HTTP method started with 0x16 rather than any known HTTP method$remoteAddressStr. " +
+          // Without HTTP pipelining it's likely that buffer is exhausted after reading one message,
+          // so we check above explicitly if we are done and stop work here without running into NotEnoughDataException
+          // when continuing to parse.
+          continue(startNewMessage)
+
+      def parseMethod(input: ByteString, cursor: Int): Int = {
+        @tailrec def parseCustomMethod(ix: Int = 0, sb: JStringBuilder = new JStringBuilder(16)): Int =
+          if (ix < maxMethodLength) {
+            byteChar(input, cursor + ix) match {
+              case ' ' =>
+                customMethods(sb.toString) match {
+                  case Some(m) =>
+                    method = m
+                    cursor + ix + 1
+                  case None =>
+                    throw new ParsingException(NotImplemented, ErrorInfo("Unsupported HTTP method", sb.toString))
+                }
+              case c => parseCustomMethod(ix + 1, sb.append(c))
+            }
+          } else
+            throw new ParsingException(
+              BadRequest,
+              ErrorInfo("Unsupported HTTP method",
+                s"HTTP method too long (started with '${sb.toString}')$remoteAddressStr. " +
+                "Increase `akka.http.server.parsing.max-method-length` to support HTTP methods with more characters."))
+
+        @tailrec def parseMethod(meth: HttpMethod, ix: Int = 1): Int =
+          if (ix == meth.value.length)
+            if (byteChar(input, cursor + ix) == ' ') {
+              method = meth
+              cursor + ix + 1
+            } else parseCustomMethod()
+          else if (byteChar(input, cursor + ix) == meth.value.charAt(ix)) parseMethod(meth, ix + 1)
+          else parseCustomMethod()
+
+        import HttpMethods._
+        (byteChar(input, cursor): @switch) match {
+          case 'G' => parseMethod(GET)
+          case 'P' => byteChar(input, cursor + 1) match {
+              case 'O' => parseMethod(POST, 2)
+              case 'U' => parseMethod(PUT, 2)
+              case 'A' => parseMethod(PATCH, 2)
+              case _   => parseCustomMethod()
+            }
+          case 'D' => parseMethod(DELETE)
+          case 'H' => parseMethod(HEAD)
+          case 'O' => parseMethod(OPTIONS)
+          case 'T' => parseMethod(TRACE)
+          case 'C' => parseMethod(CONNECT)
+          case 0x16 =>
+            throw new ParsingException(
+              BadRequest,
+              ErrorInfo(
+                "Unsupported HTTP method",
+                s"The HTTP method started with 0x16 rather than any known HTTP method$remoteAddressStr. " +
                 "Perhaps this was an HTTPS request sent to an HTTP endpoint?"))
-        case _ => parseCustomMethod()
-      }
-    }
-
-    val uriParser = new UriParser(null: ParserInput, uriParsingMode = uriParsingMode)
-
-    def parseRequestTarget(input: ByteString, cursor: Int): Int = {
-      val uriStart = cursor
-      val uriEndLimit = cursor + maxUriLength
-
-      @tailrec def findUriEnd(ix: Int = cursor): Int =
-        if (ix == input.length) throw NotEnoughDataException
-        else if (CharacterClasses.WSPCRLF(input(ix).toChar)) ix
-        else if (ix < uriEndLimit) findUriEnd(ix + 1)
-        else throw new ParsingException(
-          UriTooLong,
-          s"URI length exceeds the configured limit of $maxUriLength characters$remoteAddressStr")
-
-      val uriEnd = findUriEnd()
-      try {
-        uriBytes = input.slice(uriStart, uriEnd)
-        uriParser.reset(new ByteStringParserInput(uriBytes))
-        uri = uriParser.parseHttpRequestTarget()
-      } catch {
-        case IllegalUriException(info) => throw new ParsingException(BadRequest, info)
+          case _ => parseCustomMethod()
+        }
       }
-      uriEnd + 1
-    }
 
-    override def onBadProtocol(input: ByteString): Nothing = throw new ParsingException(HttpVersionNotSupported, "")
-
-    // http://tools.ietf.org/html/rfc7230#section-3.3
-    override def parseEntity(headers: List[HttpHeader], protocol: HttpProtocol, input: ByteString, bodyStart: Int,
-                             clh: Option[`Content-Length`], cth: Option[`Content-Type`], isChunked: Boolean,
-                             expect100continue: Boolean, hostHeaderPresent: Boolean, closeAfterResponseCompletion: Boolean,
-                             sslSession: SSLSession): StateResult =
-      if (hostHeaderPresent || protocol == HttpProtocols.`HTTP/1.0`) {
-        def emitRequestStart(
-          createEntity: EntityCreator[RequestOutput, RequestEntity],
-          headers:      List[HttpHeader]                            = headers) = {
-          val allHeaders0 =
-            if (rawRequestUriHeader) `Raw-Request-URI`(uriBytes.decodeString(HttpCharsets.`US-ASCII`.nioCharset)) :: headers
-            else headers
-
-          val attributes: Map[AttributeKey[_], Any] =
-            if (settings.includeSslSessionAttribute) Map(AttributeKeys.sslSession -> SslSessionInfo(sslSession))
-            else Map.empty
-
-          val requestStart =
-            if (method == HttpMethods.GET) {
-              Handshake.Server.websocketUpgrade(headers, hostHeaderPresent, websocketSettings, headerParser.log) match {
-                case OptionVal.Some(upgrade) =>
-                  RequestStart(method, uri, protocol, attributes.updated(AttributeKeys.webSocketUpgrade, upgrade), upgrade :: allHeaders0, createEntity, expect100continue, closeAfterResponseCompletion)
-                case OptionVal.None =>
-                  RequestStart(method, uri, protocol, attributes, allHeaders0, createEntity, expect100continue, closeAfterResponseCompletion)
-              }
-            } else RequestStart(method, uri, protocol, attributes, allHeaders0, createEntity, expect100continue, closeAfterResponseCompletion)
-
-          emit(requestStart)
+      val uriParser = new UriParser(null: ParserInput, uriParsingMode = uriParsingMode)
+
+      def parseRequestTarget(input: ByteString, cursor: Int): Int = {
+        val uriStart = cursor
+        val uriEndLimit = cursor + maxUriLength
+
+        @tailrec def findUriEnd(ix: Int = cursor): Int =
+          if (ix == input.length) throw NotEnoughDataException
+          else if (CharacterClasses.WSPCRLF(input(ix).toChar)) ix
+          else if (ix < uriEndLimit) findUriEnd(ix + 1)
+          else throw new ParsingException(
+            UriTooLong,
+            s"URI length exceeds the configured limit of $maxUriLength characters$remoteAddressStr")
+
+        val uriEnd = findUriEnd()
+        try {
+          uriBytes = input.slice(uriStart, uriEnd)
+          uriParser.reset(new ByteStringParserInput(uriBytes))
+          uri = uriParser.parseHttpRequestTarget()
+        } catch {
+          case IllegalUriException(info) => throw new ParsingException(BadRequest, info)
         }
+        uriEnd + 1
+      }
 
-        if (!isChunked) {
-          val contentLength = clh match {
-            case Some(`Content-Length`(len)) => len
-            case None                        => 0
-          }
-          if (contentLength == 0) {
-            emitRequestStart(emptyEntity(cth))
-            setCompletionHandling(HttpMessageParser.CompletionOk)
-            startNewMessage(input, bodyStart)
-          } else if (!method.isEntityAccepted) {
-            failMessageStart(UnprocessableEntity, s"${method.name} requests must not have an entity")
-          } else if (contentLength <= input.size - bodyStart) {
-            val cl = contentLength.toInt
-            emitRequestStart(strictEntity(cth, input, bodyStart, cl))
-            setCompletionHandling(HttpMessageParser.CompletionOk)
-            startNewMessage(input, bodyStart + cl)
-          } else {
-            emitRequestStart(defaultEntity(cth, contentLength))
-            parseFixedLengthBody(contentLength, closeAfterResponseCompletion)(input, bodyStart)
+      override def onBadProtocol(input: ByteString): Nothing = throw new ParsingException(HttpVersionNotSupported, "")
+
+      // http://tools.ietf.org/html/rfc7230#section-3.3
+      override def parseEntity(headers: List[HttpHeader], protocol: HttpProtocol, input: ByteString, bodyStart: Int,
+          clh: Option[`Content-Length`], cth: Option[`Content-Type`], isChunked: Boolean,
+          expect100continue: Boolean, hostHeaderPresent: Boolean, closeAfterResponseCompletion: Boolean,
+          sslSession: SSLSession): StateResult =
+        if (hostHeaderPresent || protocol == HttpProtocols.`HTTP/1.0`) {
+          def emitRequestStart(
+              createEntity: EntityCreator[RequestOutput, RequestEntity],
+              headers: List[HttpHeader] = headers) = {
+            val allHeaders0 =
+              if (rawRequestUriHeader)
+                `Raw-Request-URI`(uriBytes.decodeString(HttpCharsets.`US-ASCII`.nioCharset)) :: headers
+              else headers
+
+            val attributes: Map[AttributeKey[_], Any] =
+              if (settings.includeSslSessionAttribute) Map(AttributeKeys.sslSession -> SslSessionInfo(sslSession))
+              else Map.empty
+
+            val requestStart =
+              if (method == HttpMethods.GET) {
+                Handshake.Server.websocketUpgrade(headers, hostHeaderPresent, websocketSettings,
+                  headerParser.log) match {
+                  case OptionVal.Some(upgrade) =>
+                    RequestStart(method, uri, protocol, attributes.updated(AttributeKeys.webSocketUpgrade, upgrade),
+                      upgrade :: allHeaders0, createEntity, expect100continue, closeAfterResponseCompletion)
+                  case OptionVal.None =>
+                    RequestStart(method, uri, protocol, attributes, allHeaders0, createEntity, expect100continue,
+                      closeAfterResponseCompletion)
+                }
+              } else RequestStart(method, uri, protocol, attributes, allHeaders0, createEntity, expect100continue,
+                closeAfterResponseCompletion)
+
+            emit(requestStart)
           }
-        } else {
-          if (!method.isEntityAccepted) {
-            failMessageStart(UnprocessableEntity, s"${method.name} requests must not have an entity")
+
+          if (!isChunked) {
+            val contentLength = clh match {
+              case Some(`Content-Length`(len)) => len
+              case None                        => 0
+            }
+            if (contentLength == 0) {
+              emitRequestStart(emptyEntity(cth))
+              setCompletionHandling(HttpMessageParser.CompletionOk)
+              startNewMessage(input, bodyStart)
+            } else if (!method.isEntityAccepted) {
+              failMessageStart(UnprocessableEntity, s"${method.name} requests must not have an entity")
+            } else if (contentLength <= input.size - bodyStart) {
+              val cl = contentLength.toInt
+              emitRequestStart(strictEntity(cth, input, bodyStart, cl))
+              setCompletionHandling(HttpMessageParser.CompletionOk)
+              startNewMessage(input, bodyStart + cl)
+            } else {
+              emitRequestStart(defaultEntity(cth, contentLength))
+              parseFixedLengthBody(contentLength, closeAfterResponseCompletion)(input, bodyStart)
+            }
           } else {
-            if (clh.isEmpty) {
-              emitRequestStart(chunkedEntity(cth), headers)
-              parseChunk(input, bodyStart, closeAfterResponseCompletion, totalBytesRead = 0L)
-            } else failMessageStart("A chunked request must not contain a Content-Length header")
+            if (!method.isEntityAccepted) {
+              failMessageStart(UnprocessableEntity, s"${method.name} requests must not have an entity")
+            } else {
+              if (clh.isEmpty) {
+                emitRequestStart(chunkedEntity(cth), headers)
+                parseChunk(input, bodyStart, closeAfterResponseCompletion, totalBytesRead = 0L)
+              } else failMessageStart("A chunked request must not contain a Content-Length header")
+            }
           }
-        }
-      } else failMessageStart("Request is missing required `Host` header")
+        } else failMessageStart("Request is missing required `Host` header")
 
-    private def remoteAddressStr: String =
-      inheritedAttributes.get[HttpAttributes.RemoteAddress].map(_.address) match {
-        case Some(addr) => s" from ${addr.getHostString}:${addr.getPort}"
-        case None       => ""
-      }
-  }
+      private def remoteAddressStr: String =
+        inheritedAttributes.get[HttpAttributes.RemoteAddress].map(_.address) match {
+          case Some(addr) => s" from ${addr.getHostString}:${addr.getPort}"
+          case None       => ""
+        }
+    }
 
   override def toString: String = "HttpRequestParser"
 }
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/HttpResponseParser.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/HttpResponseParser.scala
index c0e651b71..2e2a7c1a8 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/HttpResponseParser.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/HttpResponseParser.scala
@@ -22,8 +22,9 @@ import akka.stream.scaladsl.Source
  * INTERNAL API
  */
 @InternalApi
-private[http] class HttpResponseParser(protected val settings: ParserSettings, protected val headerParser: HttpHeaderParser)
-  extends HttpMessageParser[ResponseOutput] { self =>
+private[http] class HttpResponseParser(protected val settings: ParserSettings,
+    protected val headerParser: HttpHeaderParser)
+    extends HttpMessageParser[ResponseOutput] { self =>
   import HttpResponseParser._
   import HttpMessageParser._
   import settings._
@@ -77,19 +78,19 @@ private[http] class HttpResponseParser(protected val settings: ParserSettings, p
       statusCode = code match {
         case 200 => StatusCodes.OK
         case code => StatusCodes.getForKey(code) match {
-          case Some(x) => x
-          case None => customStatusCodes(code) getOrElse {
-            // A client must understand the class of any status code, as indicated by the first digit, and
-            // treat an unrecognized status code as being equivalent to the x00 status code of that class
-            // https://tools.ietf.org/html/rfc7231#section-6
-            try {
-              val reason = asciiString(input, reasonStartIdx, reasonEndIdx)
-              StatusCodes.custom(code, reason)
-            } catch {
-              case NonFatal(_) => badStatusCodeSpecific(code)
-            }
+            case Some(x) => x
+            case None => customStatusCodes(code).getOrElse {
+                // A client must understand the class of any status code, as indicated by the first digit, and
+                // treat an unrecognized status code as being equivalent to the x00 status code of that class
+                // https://tools.ietf.org/html/rfc7231#section-6
+                try {
+                  val reason = asciiString(input, reasonStartIdx, reasonEndIdx)
+                  StatusCodes.custom(code, reason)
+                } catch {
+                  case NonFatal(_) => badStatusCodeSpecific(code)
+                }
+              }
           }
-        }
       }
     }
 
@@ -126,13 +127,13 @@ private[http] class HttpResponseParser(protected val settings: ParserSettings, p
 
   // http://tools.ietf.org/html/rfc7230#section-3.3
   protected final def parseEntity(headers: List[HttpHeader], protocol: HttpProtocol, input: ByteString, bodyStart: Int,
-                                  clh: Option[`Content-Length`], cth: Option[`Content-Type`], isChunked: Boolean,
-                                  expect100continue: Boolean, hostHeaderPresent: Boolean, closeAfterResponseCompletion: Boolean,
-                                  sslSession: SSLSession): StateResult = {
+      clh: Option[`Content-Length`], cth: Option[`Content-Type`], isChunked: Boolean,
+      expect100continue: Boolean, hostHeaderPresent: Boolean, closeAfterResponseCompletion: Boolean,
+      sslSession: SSLSession): StateResult = {
 
     def emitResponseStart(
-      createEntity: EntityCreator[ResponseOutput, ResponseEntity],
-      headers:      List[HttpHeader]                              = headers) = {
+        createEntity: EntityCreator[ResponseOutput, ResponseEntity],
+        headers: List[HttpHeader] = headers) = {
 
       val attributes: Map[AttributeKey[_], Any] =
         if (settings.includeSslSessionAttribute) Map(AttributeKeys.sslSession -> SslSessionInfo(sslSession))
@@ -172,15 +173,15 @@ private[http] class HttpResponseParser(protected val settings: ParserSettings, p
     if (statusCode.allowsEntity) {
       contextForCurrentResponse.get.requestMethod match {
         case HttpMethods.HEAD => clh match {
-          case Some(`Content-Length`(contentLength)) if contentLength > 0 =>
-            emitResponseStart {
-              StrictEntityCreator(HttpEntity.Default(contentType(cth), contentLength, Source.empty))
-            }
-            setCompletionHandling(HttpMessageParser.CompletionOk)
-            emit(MessageEnd)
-            startNewMessage(input, bodyStart)
-          case _ => finishEmptyResponse()
-        }
+            case Some(`Content-Length`(contentLength)) if contentLength > 0 =>
+              emitResponseStart {
+                StrictEntityCreator(HttpEntity.Default(contentType(cth), contentLength, Source.empty))
+              }
+              setCompletionHandling(HttpMessageParser.CompletionOk)
+              emit(MessageEnd)
+              startNewMessage(input, bodyStart)
+            case _ => finishEmptyResponse()
+          }
         case HttpMethods.CONNECT =>
           finishEmptyResponse()
         case _ =>
@@ -227,6 +228,7 @@ private[http] class HttpResponseParser(protected val settings: ParserSettings, p
 }
 
 private[http] object HttpResponseParser {
+
   /**
    * @param requestMethod the request's HTTP method
    * @param oneHundredContinueTrigger if the request contains an `Expect: 100-continue` header this option contains
@@ -234,10 +236,10 @@ private[http] object HttpResponseParser {
    *                                  request entity or the closing of the connection (for error completion)
    */
   private[http] final case class ResponseContext(
-    requestMethod:             HttpMethod,
-    oneHundredContinueTrigger: Option[Promise[Unit]])
+      requestMethod: HttpMethod,
+      oneHundredContinueTrigger: Option[Promise[Unit]])
 
   private[http] object OneHundredContinueError
-    extends RuntimeException("Received error response for request with `Expect: 100-continue` header")
-    with NoStackTrace
+      extends RuntimeException("Received error response for request with `Expect: 100-continue` header")
+      with NoStackTrace
 }
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/ParserOutput.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/ParserOutput.scala
index c05709bb5..28b224c81 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/ParserOutput.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/ParserOutput.scala
@@ -29,22 +29,22 @@ private[http] object ParserOutput {
   sealed trait ErrorOutput extends MessageOutput
 
   final case class RequestStart(
-    method:            HttpMethod,
-    uri:               Uri,
-    protocol:          HttpProtocol,
-    attributes:        Map[AttributeKey[_], _],
-    headers:           List[HttpHeader],
-    createEntity:      EntityCreator[RequestOutput, RequestEntity],
-    expect100Continue: Boolean,
-    closeRequested:    Boolean) extends MessageStart with RequestOutput
+      method: HttpMethod,
+      uri: Uri,
+      protocol: HttpProtocol,
+      attributes: Map[AttributeKey[_], _],
+      headers: List[HttpHeader],
+      createEntity: EntityCreator[RequestOutput, RequestEntity],
+      expect100Continue: Boolean,
+      closeRequested: Boolean) extends MessageStart with RequestOutput
 
   final case class ResponseStart(
-    statusCode:     StatusCode,
-    protocol:       HttpProtocol,
-    attributes:     Map[AttributeKey[_], _],
-    headers:        List[HttpHeader],
-    createEntity:   EntityCreator[ResponseOutput, ResponseEntity],
-    closeRequested: Boolean) extends MessageStart with ResponseOutput
+      statusCode: StatusCode,
+      protocol: HttpProtocol,
+      attributes: Map[AttributeKey[_], _],
+      headers: List[HttpHeader],
+      createEntity: EntityCreator[ResponseOutput, ResponseEntity],
+      closeRequested: Boolean) extends MessageStart with ResponseOutput
 
   case object MessageEnd extends MessageOutput
 
@@ -73,7 +73,8 @@ private[http] object ParserOutput {
   /**
    * An entity creator that uses the given entity directly and ignores the passed-in source.
    */
-  final case class StrictEntityCreator[-A <: ParserOutput, +B <: UniversalEntity](entity: B) extends EntityCreator[A, B] {
+  final case class StrictEntityCreator[-A <: ParserOutput, +B <: UniversalEntity](
+      entity: B) extends EntityCreator[A, B] {
     def apply(parts: Source[A, NotUsed]) = {
       // We might need to drain stray empty tail streams which will be read by no one.
       StreamUtils.cancelSource(parts)(StreamUtils.OnlyRunInGraphInterpreterContext) // only called within Http graphs stages
@@ -85,7 +86,7 @@ private[http] object ParserOutput {
    * An entity creator that creates the entity from the a source of parts.
    */
   final case class StreamedEntityCreator[-A <: ParserOutput, +B <: HttpEntity](creator: Source[A, NotUsed] => B)
-    extends EntityCreator[A, B] {
+      extends EntityCreator[A, B] {
     def apply(parts: Source[A, NotUsed]) = creator(parts)
   }
 }
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/SpecializedHeaderValueParsers.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/SpecializedHeaderValueParsers.scala
index db634d213..a6486378b 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/SpecializedHeaderValueParsers.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/SpecializedHeaderValueParsers.scala
@@ -22,7 +22,8 @@ private[parsing] object SpecializedHeaderValueParsers {
   def specializedHeaderValueParsers = Seq(ContentLengthParser)
 
   object ContentLengthParser extends HeaderValueParser("Content-Length", maxValueCount = 1) {
-    def apply(hhp: HttpHeaderParser, input: ByteString, valueStart: Int, onIllegalHeader: ErrorInfo => Unit): (HttpHeader, Int) = {
+    def apply(hhp: HttpHeaderParser, input: ByteString, valueStart: Int, onIllegalHeader: ErrorInfo => Unit)
+        : (HttpHeader, Int) = {
       @tailrec def recurse(ix: Int = valueStart, result: Long = 0): (HttpHeader, Int) = {
         val c = byteChar(input, ix)
         if (result < 0) fail("`Content-Length` header value must not exceed 63-bit integer range")
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/package.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/package.scala
index f677016c5..502b2fccd 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/package.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/package.scala
@@ -22,11 +22,11 @@ package object parsing {
     case '\t'                           => "\\t"
     case '\r'                           => "\\r"
     case '\n'                           => "\\n"
-    case x if Character.isISOControl(x) => "\\u%04x" format c.toInt
+    case x if Character.isISOControl(x) => "\\u%04x".format(c.toInt)
     case x                              => x.toString
   }
 
-  private[http] def byteChar(input: ByteString, ix: Int): Char = (byteAt(input, ix) & 0xff).toChar
+  private[http] def byteChar(input: ByteString, ix: Int): Char = (byteAt(input, ix) & 0xFF).toChar
 
   private[http] def byteAt(input: ByteString, ix: Int): Byte =
     if (ix < input.length) input(ix) else throw NotEnoughDataException
@@ -38,8 +38,8 @@ package object parsing {
   }
 
   private[http] def logParsingError(info: ErrorInfo, log: LoggingAdapter,
-                                    settings:          ParserSettings.ErrorLoggingVerbosity,
-                                    ignoreHeaderNames: Set[String]                          = Set.empty): Unit =
+      settings: ParserSettings.ErrorLoggingVerbosity,
+      ignoreHeaderNames: Set[String] = Set.empty): Unit =
     settings match {
       case ParserSettings.ErrorLoggingVerbosity.Off => // nothing to do
       case ParserSettings.ErrorLoggingVerbosity.Simple =>
@@ -60,8 +60,8 @@ package parsing {
    */
   @InternalApi
   private[parsing] class ParsingException(
-    val status: StatusCode,
-    val info:   ErrorInfo) extends RuntimeException(info.formatPretty) {
+      val status: StatusCode,
+      val info: ErrorInfo) extends RuntimeException(info.formatPretty) {
     def this(status: StatusCode, summary: String) =
       this(status, ErrorInfo(if (summary.isEmpty) status.defaultMessage else summary))
     def this(summary: String) =
@@ -76,4 +76,3 @@ package parsing {
   @InternalApi
   private[parsing] object NotEnoughDataException extends SingletonException
 }
-
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/BodyPartRenderer.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/BodyPartRenderer.scala
index 23765903f..78b8ffcc1 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/BodyPartRenderer.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/BodyPartRenderer.scala
@@ -28,9 +28,9 @@ import akka.annotation.InternalApi
 private[http] object BodyPartRenderer {
 
   def streamed(
-    boundary:            String,
-    partHeadersSizeHint: Int,
-    _log:                LoggingAdapter): GraphStage[FlowShape[Multipart.BodyPart, Source[ChunkStreamPart, Any]]] =
+      boundary: String,
+      partHeadersSizeHint: Int,
+      _log: LoggingAdapter): GraphStage[FlowShape[Multipart.BodyPart, Source[ChunkStreamPart, Any]]] =
     new GraphStage[FlowShape[Multipart.BodyPart, Source[ChunkStreamPart, Any]]] {
       var firstBoundaryRendered = false
 
@@ -47,7 +47,7 @@ private[http] object BodyPartRenderer {
 
             def bodyPartChunks(data: Source[ByteString, Any]): Source[ChunkStreamPart, Any] = {
               val entityChunks = data.map[ChunkStreamPart](Chunk(_))
-              (chunkStream(r.get) ++ entityChunks).mapMaterializedValue((_) => ())
+              (chunkStream(r.get) ++ entityChunks).mapMaterializedValue(_ => ())
             }
 
             def completePartRendering(entity: HttpEntity): Source[ChunkStreamPart, Any] =
@@ -93,7 +93,7 @@ private[http] object BodyPartRenderer {
     }
 
   def strict(parts: immutable.Seq[Multipart.BodyPart.Strict], boundary: String,
-             partHeadersSizeHint: Int, log: LoggingAdapter): ByteString = {
+      partHeadersSizeHint: Int, log: LoggingAdapter): ByteString = {
     val r = new ByteStringRendering(partHeadersSizeHint)
     if (parts.nonEmpty) {
       for (part <- parts) {
@@ -116,18 +116,20 @@ private[http] object BodyPartRenderer {
     r ~~ CrLf ~~ '-' ~~ '-' ~~ boundary ~~ '-' ~~ '-'
 
   private def renderHeaders(r: Rendering, headers: immutable.Seq[HttpHeader], log: LoggingAdapter): Unit = {
-    headers foreach renderHeader(r, log)
+    headers.foreach(renderHeader(r, log))
     r ~~ CrLf
   }
 
   private def renderHeader(r: Rendering, log: LoggingAdapter): HttpHeader => Unit = {
     case x: `Content-Length` =>
-      suppressionWarning(log, x, "explicit `Content-Length` header is not allowed. Use the appropriate HttpEntity subtype.")
+      suppressionWarning(log, x,
+        "explicit `Content-Length` header is not allowed. Use the appropriate HttpEntity subtype.")
 
     case x: `Content-Type` =>
-      suppressionWarning(log, x, "explicit `Content-Type` header is not allowed. Set `HttpRequest.entity.contentType` instead.")
+      suppressionWarning(log, x,
+        "explicit `Content-Type` header is not allowed. Set `HttpRequest.entity.contentType` instead.")
 
-    case x: RawHeader if (x is "content-type") || (x is "content-length") =>
+    case x: RawHeader if (x.is("content-type")) || (x.is("content-length")) =>
       suppressionWarning(log, x, "illegal RawHeader")
 
     case x => r ~~ x
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/DateHeaderRendering.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/DateHeaderRendering.scala
index 3014691e3..0c0363a7c 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/DateHeaderRendering.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/DateHeaderRendering.scala
@@ -8,7 +8,7 @@ import akka.actor.{ ClassicActorSystemProvider, Scheduler }
 import akka.annotation.InternalApi
 import akka.http.impl.util.Rendering.CrLf
 import akka.http.impl.util.{ ByteArrayRendering, StringRendering }
-import akka.http.scaladsl.model.{ DateTime, headers }
+import akka.http.scaladsl.model.{ headers, DateTime }
 
 import java.util.concurrent.atomic.AtomicReference
 import scala.concurrent.duration._
@@ -23,7 +23,8 @@ import scala.concurrent.ExecutionContext
 
 /** INTERNAL API */
 @InternalApi private[http] object DateHeaderRendering {
-  def apply(now: () => Long = () => System.currentTimeMillis())(implicit system: ClassicActorSystemProvider): DateHeaderRendering =
+  def apply(now: () => Long = () => System.currentTimeMillis())(
+      implicit system: ClassicActorSystemProvider): DateHeaderRendering =
     apply(system.classicSystem.scheduler, now)(system.classicSystem.dispatcher)
 
   def apply(scheduler: Scheduler, now: () => Long)(implicit ec: ExecutionContext): DateHeaderRendering = {
@@ -31,6 +32,7 @@ import scala.concurrent.ExecutionContext
       DateTime(now()).renderRfc1123DateTimeString(new StringRendering).get
 
     sealed trait DateState
+
     /** Date has not been used for a while */
     case object Idle extends DateState
     case class AutoUpdated(value: String) extends DateState {
@@ -56,7 +58,8 @@ import scala.concurrent.ExecutionContext
             scheduleAutoUpdate()
           } else
             dateState.set(Idle) // wasn't retrieved, no reason to continue autoupdating
-        case Idle => new IllegalStateException("Should not happen, invariant is either state == Idle or scheduled both never both")
+        case Idle =>
+          new IllegalStateException("Should not happen, invariant is either state == Idle or scheduled both never both")
       }
 
     def get(rendered: String): AutoUpdated =
@@ -86,8 +89,11 @@ import scala.concurrent.ExecutionContext
   }
 
   val Unavailable = new DateHeaderRendering {
-    override def renderHeaderPair(): (String, String) = throw new IllegalStateException("DateHeaderRendering is not available here")
-    override def renderHeaderBytes(): Array[Byte] = throw new IllegalStateException("DateHeaderRendering is not available here")
-    override def renderHeaderValue(): String = throw new IllegalStateException("DateHeaderRendering is not available here")
+    override def renderHeaderPair(): (String, String) =
+      throw new IllegalStateException("DateHeaderRendering is not available here")
+    override def renderHeaderBytes(): Array[Byte] =
+      throw new IllegalStateException("DateHeaderRendering is not available here")
+    override def renderHeaderValue(): String =
+      throw new IllegalStateException("DateHeaderRendering is not available here")
   }
 }
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/HttpRequestRendererFactory.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/HttpRequestRendererFactory.scala
index 980c9d690..a1c16d1a7 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/HttpRequestRendererFactory.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/HttpRequestRendererFactory.scala
@@ -25,9 +25,9 @@ import headers._
  */
 @InternalApi
 private[http] class HttpRequestRendererFactory(
-  userAgentHeader:       Option[headers.`User-Agent`],
-  requestHeaderSizeHint: Int,
-  log:                   LoggingAdapter) {
+    userAgentHeader: Option[headers.`User-Agent`],
+    requestHeaderSizeHint: Int,
+    log: LoggingAdapter) {
   import HttpRequestRendererFactory.RequestRenderingOutput
 
   def renderToSource(ctx: RequestRenderingContext): Source[ByteString, Any] = render(ctx).byteStream
@@ -50,61 +50,64 @@ private[http] class HttpRequestRendererFactory(
     def render(h: HttpHeader) = r ~~ h
 
     @tailrec def renderHeaders(remaining: List[HttpHeader], hostHeaderSeen: Boolean = false,
-                               userAgentSeen: Boolean = false, transferEncodingSeen: Boolean = false): Unit =
+        userAgentSeen: Boolean = false, transferEncodingSeen: Boolean = false): Unit =
       remaining match {
         case head :: tail => head match {
-          case x: `Content-Length` =>
-            suppressionWarning(log, x, "explicit `Content-Length` header is not allowed. Use the appropriate HttpEntity subtype.")
-            renderHeaders(tail, hostHeaderSeen, userAgentSeen, transferEncodingSeen)
-
-          case x: `Content-Type` =>
-            suppressionWarning(log, x, "explicit `Content-Type` header is not allowed. Set `HttpRequest.entity.contentType` instead.")
-            renderHeaders(tail, hostHeaderSeen, userAgentSeen, transferEncodingSeen)
-
-          case x: `Transfer-Encoding` =>
-            x.withChunkedPeeled match {
-              case None =>
-                suppressionWarning(log, head)
-                renderHeaders(tail, hostHeaderSeen, userAgentSeen, transferEncodingSeen)
-              case Some(te) =>
-                // if the user applied some custom transfer-encoding we need to keep the header
-                render(if (entity.isChunked && !entity.isKnownEmpty) te.withChunked else te)
-                renderHeaders(tail, hostHeaderSeen, userAgentSeen, transferEncodingSeen = true)
-            }
-
-          case x: `Host` =>
-            render(x)
-            renderHeaders(tail, hostHeaderSeen = true, userAgentSeen, transferEncodingSeen)
-
-          case x: `User-Agent` =>
-            render(x)
-            renderHeaders(tail, hostHeaderSeen, userAgentSeen = true, transferEncodingSeen)
-
-          case x: `Raw-Request-URI` => // we never render this header
-            renderHeaders(tail, hostHeaderSeen, userAgentSeen, transferEncodingSeen)
-
-          case x: CustomHeader =>
-            if (x.renderInRequests) render(x)
-            renderHeaders(tail, hostHeaderSeen, userAgentSeen, transferEncodingSeen)
-
-          case x: RawHeader if (x is "content-type") || (x is "content-length") ||
-            (x is "transfer-encoding") =>
-            suppressionWarning(log, x, "illegal RawHeader")
-            renderHeaders(tail, hostHeaderSeen, userAgentSeen, transferEncodingSeen)
-
-          case x: RawHeader if x is "user-agent" =>
-            render(x)
-            renderHeaders(tail, hostHeaderSeen, userAgentSeen = true, transferEncodingSeen)
-
-          case x: RawHeader if x is "host" =>
-            render(x)
-            renderHeaders(tail, hostHeaderSeen = true, userAgentSeen, transferEncodingSeen)
-
-          case x =>
-            if (x.renderInRequests) render(x)
-            else log.warning("HTTP header '{}' is not allowed in requests", x)
-            renderHeaders(tail, hostHeaderSeen, userAgentSeen, transferEncodingSeen)
-        }
+            case x: `Content-Length` =>
+              suppressionWarning(log, x,
+                "explicit `Content-Length` header is not allowed. Use the appropriate HttpEntity subtype.")
+              renderHeaders(tail, hostHeaderSeen, userAgentSeen, transferEncodingSeen)
+
+            case x: `Content-Type` =>
+              suppressionWarning(log, x,
+                "explicit `Content-Type` header is not allowed. Set `HttpRequest.entity.contentType` instead.")
+              renderHeaders(tail, hostHeaderSeen, userAgentSeen, transferEncodingSeen)
+
+            case x: `Transfer-Encoding` =>
+              x.withChunkedPeeled match {
+                case None =>
+                  suppressionWarning(log, head)
+                  renderHeaders(tail, hostHeaderSeen, userAgentSeen, transferEncodingSeen)
+                case Some(te) =>
+                  // if the user applied some custom transfer-encoding we need to keep the header
+                  render(if (entity.isChunked && !entity.isKnownEmpty) te.withChunked else te)
+                  renderHeaders(tail, hostHeaderSeen, userAgentSeen, transferEncodingSeen = true)
+              }
+
+            case x: `Host` =>
+              render(x)
+              renderHeaders(tail, hostHeaderSeen = true, userAgentSeen, transferEncodingSeen)
+
+            case x: `User-Agent` =>
+              render(x)
+              renderHeaders(tail, hostHeaderSeen, userAgentSeen = true, transferEncodingSeen)
+
+            case x: `Raw-Request-URI` => // we never render this header
+              renderHeaders(tail, hostHeaderSeen, userAgentSeen, transferEncodingSeen)
+
+            case x: CustomHeader =>
+              if (x.renderInRequests) render(x)
+              renderHeaders(tail, hostHeaderSeen, userAgentSeen, transferEncodingSeen)
+
+            case x: RawHeader
+                if (x.is("content-type")) || (x.is("content-length")) ||
+                (x.is("transfer-encoding")) =>
+              suppressionWarning(log, x, "illegal RawHeader")
+              renderHeaders(tail, hostHeaderSeen, userAgentSeen, transferEncodingSeen)
+
+            case x: RawHeader if x.is("user-agent") =>
+              render(x)
+              renderHeaders(tail, hostHeaderSeen, userAgentSeen = true, transferEncodingSeen)
+
+            case x: RawHeader if x.is("host") =>
+              render(x)
+              renderHeaders(tail, hostHeaderSeen = true, userAgentSeen, transferEncodingSeen)
+
+            case x =>
+              if (x.renderInRequests) render(x)
+              else log.warning("HTTP header '{}' is not allowed in requests", x)
+              renderHeaders(tail, hostHeaderSeen, userAgentSeen, transferEncodingSeen)
+          }
 
         case Nil =>
           if (!hostHeaderSeen) r ~~ ctx.hostHeader
@@ -114,7 +117,9 @@ private[http] class HttpRequestRendererFactory(
       }
 
     def renderContentLength(contentLength: Long) =
-      if (method.isEntityAccepted && (contentLength > 0 || method.requestEntityAcceptance == Expected)) r ~~ `Content-Length` ~~ contentLength ~~ CrLf else r
+      if (method.isEntityAccepted && (contentLength > 0 || method.requestEntityAcceptance == Expected))
+        r ~~ `Content-Length` ~~ contentLength ~~ CrLf
+      else r
 
     def renderStreamed(body: Source[ByteString, Any]): RequestRenderingOutput = {
       val headerPart = Source.single(r.get)
@@ -122,7 +127,8 @@ private[http] class HttpRequestRendererFactory(
         case None => headerPart ++ body
         case Some(future) =>
           val barrier = Source.fromFuture(future).drop(1).asInstanceOf[Source[ByteString, Any]]
-          (headerPart ++ barrier ++ body).recoverWithRetries(-1, { case HttpResponseParser.OneHundredContinueError => Source.empty })
+          (headerPart ++ barrier ++ body).recoverWithRetries(-1,
+            { case HttpResponseParser.OneHundredContinueError => Source.empty })
       }
       RequestRenderingOutput.Streamed(stream)
     }
@@ -157,7 +163,8 @@ private[http] class HttpRequestRendererFactory(
     render(ctx) match {
       case RequestRenderingOutput.Strict(bytes) => bytes
       case _: RequestRenderingOutput.Streamed =>
-        throw new IllegalArgumentException(s"Request entity was not Strict but ${ctx.request.entity.getClass.getSimpleName}")
+        throw new IllegalArgumentException(
+          s"Request entity was not Strict but ${ctx.request.entity.getClass.getSimpleName}")
     }
 }
 
@@ -187,6 +194,6 @@ private[http] object HttpRequestRendererFactory {
  */
 @InternalApi
 private[http] final case class RequestRenderingContext(
-  request:           HttpRequest,
-  hostHeader:        Host,
-  sendEntityTrigger: Option[Future[NotUsed]] = None)
+    request: HttpRequest,
+    hostHeader: Host,
+    sendEntityTrigger: Option[Future[NotUsed]] = None)
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/HttpResponseRendererFactory.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/HttpResponseRendererFactory.scala
index a65ae2c65..40b654462 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/HttpResponseRendererFactory.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/HttpResponseRendererFactory.scala
@@ -30,10 +30,10 @@ import scala.util.control.NonFatal
  */
 @InternalApi
 private[http] class HttpResponseRendererFactory(
-  serverHeader:           Option[headers.Server],
-  responseHeaderSizeHint: Int,
-  log:                    LoggingAdapter,
-  dateHeaderRendering:    DateHeaderRendering) {
+    serverHeader: Option[headers.Server],
+    responseHeaderSizeHint: Int,
+    log: LoggingAdapter,
+    dateHeaderRendering: DateHeaderRendering) {
 
   private val renderDefaultServerHeader: Rendering => Unit =
     serverHeader match {
@@ -58,30 +58,34 @@ private[http] class HttpResponseRendererFactory(
         var transferSink: Option[SubSinkInlet[ByteString]] = None
         def transferring: Boolean = transferSink.isDefined
 
-        setHandler(in, new InHandler {
-          override def onPush(): Unit =
-            render(grab(in)) match {
-              case Strict(outElement) =>
-                push(out, outElement)
-                if (close) completeStage()
-              case HeadersAndStreamedEntity(headerData, outStream) =>
-                try transfer(headerData, outStream)
-                catch {
-                  case NonFatal(e) =>
-                    log.error(e, s"Rendering of response failed because response entity stream materialization failed with '${e.getMessage}'. Sending out 500 response instead.")
-                    push(out, render(ResponseRenderingContext(HttpResponse(500, entity = StatusCodes.InternalServerError.defaultMessage))).asInstanceOf[Strict].bytes)
-                }
-            }
+        setHandler(in,
+          new InHandler {
+            override def onPush(): Unit =
+              render(grab(in)) match {
+                case Strict(outElement) =>
+                  push(out, outElement)
+                  if (close) completeStage()
+                case HeadersAndStreamedEntity(headerData, outStream) =>
+                  try transfer(headerData, outStream)
+                  catch {
+                    case NonFatal(e) =>
+                      log.error(e,
+                        s"Rendering of response failed because response entity stream materialization failed with '${e.getMessage}'. Sending out 500 response instead.")
+                      push(out,
+                        render(ResponseRenderingContext(HttpResponse(500,
+                          entity = StatusCodes.InternalServerError.defaultMessage))).asInstanceOf[Strict].bytes)
+                  }
+              }
 
-          override def onUpstreamFinish(): Unit =
-            if (transferring) closeMode = CloseConnection
-            else completeStage()
+            override def onUpstreamFinish(): Unit =
+              if (transferring) closeMode = CloseConnection
+              else completeStage()
 
-          override def onUpstreamFailure(ex: Throwable): Unit = {
-            stopTransfer()
-            failStage(ex)
-          }
-        })
+            override def onUpstreamFailure(ex: Throwable): Unit = {
+              stopTransfer()
+              failStage(ex)
+            }
+          })
         private val waitForDemandHandler = new OutHandler {
           def onPull(): Unit = if (!hasBeenPulled(in)) tryPull(in)
         }
@@ -109,15 +113,16 @@ private[http] class HttpResponseRendererFactory(
             push(out, ResponseRenderingOutput.HttpData(headerData))
             headersSent = true
           }
-          setHandler(out, new OutHandler {
-            override def onPull(): Unit =
-              if (!headersSent) sendHeaders()
-              else sinkIn.pull()
-            override def onDownstreamFinish(): Unit = {
-              completeStage()
-              stopTransfer()
-            }
-          })
+          setHandler(out,
+            new OutHandler {
+              override def onPull(): Unit =
+                if (!headersSent) sendHeaders()
+                else sinkIn.pull()
+              override def onDownstreamFinish(): Unit = {
+                completeStage()
+                stopTransfer()
+              }
+            })
 
           try {
             outStream.runWith(sinkIn.sink)(interpreter.subFusingMaterializer)
@@ -137,7 +142,8 @@ private[http] class HttpResponseRendererFactory(
 
           def renderStatusLine(): Unit =
             protocol match {
-              case `HTTP/1.1` => if (status eq StatusCodes.OK) r ~~ DefaultStatusLineBytes else r ~~ StatusLineStartBytes ~~ status ~~ CrLf
+              case `HTTP/1.1` => if (status eq StatusCodes.OK) r ~~ DefaultStatusLineBytes
+                else r ~~ StatusLineStartBytes ~~ status ~~ CrLf
               case `HTTP/1.0` => r ~~ protocol ~~ ' ' ~~ status ~~ CrLf
               case other      => throw new IllegalStateException(s"Unexpected protocol '$other'")
             }
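
[editorial note: the status-line branching reflowed in the hunk above can be sketched standalone. This is a simplified, string-based sketch — the real renderer writes pre-rendered byte strings such as `DefaultStatusLineBytes` into a `Rendering`; the function name and string protocol values here are illustrative only:]

```scala
// Illustrative sketch of the renderStatusLine branching: HTTP/1.1 200 gets a
// pre-baked fast path, other statuses and HTTP/1.0 are rendered explicitly.
def renderStatusLine(protocol: String, status: Int, reason: String): String =
  protocol match {
    case "HTTP/1.1" =>
      if (status == 200) "HTTP/1.1 200 OK\r\n" // fast path, mirrors DefaultStatusLineBytes
      else s"HTTP/1.1 $status $reason\r\n"
    case "HTTP/1.0" => s"HTTP/1.0 $status $reason\r\n"
    case other      => throw new IllegalStateException(s"Unexpected protocol '$other'")
  }
```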
@@ -165,10 +171,12 @@ private[http] class HttpResponseRendererFactory(
                   dateSeen = true
 
                 case x: `Content-Length` =>
-                  suppressionWarning(log, x, "explicit `Content-Length` header is not allowed. Use the appropriate HttpEntity subtype.")
+                  suppressionWarning(log, x,
+                    "explicit `Content-Length` header is not allowed. Use the appropriate HttpEntity subtype.")
 
                 case x: `Content-Type` =>
-                  suppressionWarning(log, x, "explicit `Content-Type` header is not allowed. Set `HttpResponse.entity.contentType` instead.")
+                  suppressionWarning(log, x,
+                    "explicit `Content-Type` header is not allowed. Set `HttpResponse.entity.contentType` instead.")
 
                 case x: `Transfer-Encoding` =>
                   x.withChunkedPeeled match {
@@ -186,8 +194,9 @@ private[http] class HttpResponseRendererFactory(
                 case x: CustomHeader =>
                   if (x.renderInResponses) render(x)
 
-                case x: RawHeader if (x is "content-type") || (x is "content-length") || (x is "transfer-encoding") ||
-                  (x is "date") || (x is "server") || (x is "connection") =>
+                case x: RawHeader
+                    if (x.is("content-type")) || (x.is("content-length")) || (x.is("transfer-encoding")) ||
+                    (x.is("date")) || (x.is("server")) || (x.is("connection")) =>
                   suppressionWarning(log, x, "illegal RawHeader")
 
                 case x =>
@@ -202,23 +211,24 @@ private[http] class HttpResponseRendererFactory(
             closeIf {
               // if we are prohibited to keep-alive by the spec
               alwaysClose ||
-                // if the controller asked for closing (error, early response, etc. overrides anything
-                ctx.closeRequested.wasForced ||
-                // if the client wants to close and the response doesn't override
-                (ctx.closeRequested.shouldClose && ((connHeader eq null) || !connHeader.hasKeepAlive)) ||
-                // if the application wants to close explicitly
-                (protocol match {
-                  case `HTTP/1.1` => (connHeader ne null) && connHeader.hasClose
-                  case `HTTP/1.0` => if (connHeader eq null) ctx.requestProtocol == `HTTP/1.1` else !connHeader.hasKeepAlive
-                  case other      => throw new IllegalStateException(s"Unexpected protocol '$other'")
-                })
+              // if the controller asked for closing (error, early response, etc. overrides anything
+              ctx.closeRequested.wasForced ||
+              // if the client wants to close and the response doesn't override
+              (ctx.closeRequested.shouldClose && ((connHeader eq null) || !connHeader.hasKeepAlive)) ||
+              // if the application wants to close explicitly
+              (protocol match {
+                case `HTTP/1.1` => (connHeader ne null) && connHeader.hasClose
+                case `HTTP/1.0` =>
+                  if (connHeader eq null) ctx.requestProtocol == `HTTP/1.1` else !connHeader.hasKeepAlive
+                case other => throw new IllegalStateException(s"Unexpected protocol '$other'")
+              })
             }
 
             // Do we render an explicit Connection header?
             val renderConnectionHeader =
               protocol == `HTTP/1.0` && !close || protocol == `HTTP/1.1` && close || // if we don't follow the default behavior
-                close != ctx.closeRequested.shouldClose || // if we override the client's closing request
-                protocol != ctx.requestProtocol // if we reply with a mismatching protocol (let's be very explicit in this case)
+              close != ctx.closeRequested.shouldClose || // if we override the client's closing request
+              protocol != ctx.requestProtocol // if we reply with a mismatching protocol (let's be very explicit in this case)
 
             if (renderConnectionHeader)
               r ~~ Connection ~~ (if (close) CloseBytes else KeepAliveBytes) ~~ CrLf
@@ -243,8 +253,7 @@ private[http] class HttpResponseRendererFactory(
             } else {
               HeadersAndStreamedEntity(
                 r.asByteString,
-                entityBytes
-              )
+                entityBytes)
             }
 
           @tailrec def completeResponseRendering(entity: ResponseEntity): StrictOrStreamed =
@@ -264,8 +273,9 @@ private[http] class HttpResponseRendererFactory(
 
                 Strict {
                   closeMode match {
-                    case SwitchToOtherProtocol(handler) => ResponseRenderingOutput.SwitchToOtherProtocol(finalBytes, handler)
-                    case _                              => ResponseRenderingOutput.HttpData(finalBytes)
+                    case SwitchToOtherProtocol(handler) =>
+                      ResponseRenderingOutput.SwitchToOtherProtocol(finalBytes, handler)
+                    case _ => ResponseRenderingOutput.HttpData(finalBytes)
                   }
                 }
 
@@ -297,7 +307,8 @@ private[http] class HttpResponseRendererFactory(
 
     sealed trait StrictOrStreamed
     case class Strict(bytes: ResponseRenderingOutput) extends StrictOrStreamed
-    case class HeadersAndStreamedEntity(headerBytes: ByteString, remainingData: Source[ByteString, Any]) extends StrictOrStreamed
+    case class HeadersAndStreamedEntity(headerBytes: ByteString, remainingData: Source[ByteString, Any])
+        extends StrictOrStreamed
   }
 
   sealed trait CloseMode
@@ -311,10 +322,10 @@ private[http] class HttpResponseRendererFactory(
  */
 @InternalApi
 private[http] final case class ResponseRenderingContext(
-  response:        HttpResponse,
-  requestMethod:   HttpMethod     = HttpMethods.GET,
-  requestProtocol: HttpProtocol   = HttpProtocols.`HTTP/1.1`,
-  closeRequested:  CloseRequested = CloseRequested.Unspecified)
+    response: HttpResponse,
+    requestMethod: HttpMethod = HttpMethods.GET,
+    requestProtocol: HttpProtocol = HttpProtocols.`HTTP/1.1`,
+    closeRequested: CloseRequested = CloseRequested.Unspecified)
 
 /**
  * INTERNAL API
@@ -344,9 +355,11 @@ private[http] object ResponseRenderingContext {
 /** INTERNAL API */
 @InternalApi
 private[http] sealed trait ResponseRenderingOutput
+
 /** INTERNAL API */
 @InternalApi
 private[http] object ResponseRenderingOutput {
   private[http] case class HttpData(bytes: ByteString) extends ResponseRenderingOutput
-  private[http] case class SwitchToOtherProtocol(httpResponseBytes: ByteString, newHandler: Flow[ByteString, ByteString, Any]) extends ResponseRenderingOutput
+  private[http] case class SwitchToOtherProtocol(httpResponseBytes: ByteString,
+      newHandler: Flow[ByteString, ByteString, Any]) extends ResponseRenderingOutput
 }
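
[editorial note: the keep-alive decision that scalafmt re-indented in the `closeIf` block above can be read as a pure predicate. The sketch below is not the actual akka-http code — `ConnHeader`, `shouldCloseConnection`, and the `Option`-based header handling are hypothetical stand-ins for the renderer's nullable `connHeader` — but the boolean structure matches the diff:]

```scala
// Sketch of the connection-close decision from HttpResponseRendererFactory.
sealed trait Protocol
case object `HTTP/1.0` extends Protocol
case object `HTTP/1.1` extends Protocol

// Stand-in for the parsed `Connection` header (null in the real code when absent).
final case class ConnHeader(hasClose: Boolean, hasKeepAlive: Boolean)

def shouldCloseConnection(
    alwaysClose: Boolean,          // the spec prohibits keep-alive
    closeForced: Boolean,          // controller forced close (error, early response, ...)
    clientRequestedClose: Boolean, // client asked to close
    connHeader: Option[ConnHeader],
    protocol: Protocol,
    requestProtocol: Protocol): Boolean =
  alwaysClose ||
  closeForced ||
  // client wants to close and the response doesn't override
  (clientRequestedClose && !connHeader.exists(_.hasKeepAlive)) ||
  // application closes explicitly
  (protocol match {
    case `HTTP/1.1` => connHeader.exists(_.hasClose)
    case `HTTP/1.0` =>
      connHeader.fold(requestProtocol == `HTTP/1.1`)(h => !h.hasKeepAlive)
  })
```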
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/RenderSupport.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/RenderSupport.scala
index 577294445..5f1badb5b 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/RenderSupport.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/RenderSupport.scala
@@ -113,13 +113,15 @@ private[http] object RenderSupport {
           if (sent <= length) {
             push(out, elem)
           } else {
-            failStage(InvalidContentLengthException(s"HTTP message had declared Content-Length $length but entity data stream amounts to more bytes"))
+            failStage(InvalidContentLengthException(
+              s"HTTP message had declared Content-Length $length but entity data stream amounts to more bytes"))
           }
         }
 
         override def onUpstreamFinish(): Unit = {
           if (sent < length) {
-            failStage(InvalidContentLengthException(s"HTTP message had declared Content-Length $length but entity data stream amounts to ${length - sent} bytes less"))
+            failStage(InvalidContentLengthException(
+              s"HTTP message had declared Content-Length $length but entity data stream amounts to ${length - sent} bytes less"))
           } else {
             completeStage()
           }
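
[editorial note: the two `InvalidContentLengthException` messages rewrapped above come from the same length check — too many bytes fails on push, too few fails on stream completion. A pure, non-streaming sketch of that check (the function name and `Either` result are illustrative; the real code is a `GraphStage` counting bytes as they pass):]

```scala
// Validate that entity chunks add up to the declared Content-Length,
// reproducing the two failure messages from CheckContentLengthTransformer.
def checkContentLength(declared: Long, chunks: Seq[Array[Byte]]): Either[String, Long] = {
  val sent = chunks.iterator.map(_.length.toLong).sum
  if (sent > declared)
    Left(s"HTTP message had declared Content-Length $declared but entity data stream amounts to more bytes")
  else if (sent < declared)
    Left(s"HTTP message had declared Content-Length $declared but entity data stream amounts to ${declared - sent} bytes less")
  else
    Right(sent)
}
```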
@@ -137,9 +139,9 @@ private[http] object RenderSupport {
     import chunk._
     val renderedSize = // buffer space required for rendering (without trailer)
       CharUtils.numberOfHexDigits(data.length) +
-        (if (extension.isEmpty) 0 else extension.length + 1) +
-        data.length +
-        2 + 2
+      (if (extension.isEmpty) 0 else extension.length + 1) +
+      data.length +
+      2 + 2
     val r = new ByteStringRendering(renderedSize)
     r ~~% data.length
     if (extension.nonEmpty) r ~~ ';' ~~ extension
@@ -154,6 +156,6 @@ private[http] object RenderSupport {
   }
 
   def suppressionWarning(log: LoggingAdapter, h: HttpHeader,
-                         msg: String = "the akka-http-core layer sets this header automatically!"): Unit =
+      msg: String = "the akka-http-core layer sets this header automatically!"): Unit =
     log.warning("Explicitly set HTTP header '{}' is ignored, {}", h, msg)
 }
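
[editorial note: the `renderedSize` arithmetic re-indented above pre-computes the buffer for one chunked-transfer-coding frame: hex size digits, optional `;extension`, the data, and two CRLFs. A string-based sketch under those assumptions (the real code renders into a `ByteStringRendering`; `renderChunk` here is illustrative):]

```scala
// Number of hex digits needed for a non-negative length (at least 1, for "0").
def numberOfHexDigits(l: Long): Int =
  math.max(1, (64 - java.lang.Long.numberOfLeadingZeros(l) + 3) / 4)

// One chunk frame: "<hex size>[;extension]CRLF<data>CRLF".
def renderChunk(data: Array[Byte], extension: String = ""): String = {
  val sizeLine = data.length.toHexString +
    (if (extension.isEmpty) "" else ";" + extension)
  sizeLine + "\r\n" + new String(data, "ISO-8859-1") + "\r\n"
}

// The buffer-size formula from the diff, term for term.
def renderedSize(dataLength: Int, extension: String): Int =
  numberOfHexDigits(dataLength) +
  (if (extension.isEmpty) 0 else extension.length + 1) +
  dataLength +
  2 + 2
```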
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/server/HttpServerBluePrint.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/server/HttpServerBluePrint.scala
index 88eea558c..24e55bddc 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/server/HttpServerBluePrint.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/server/HttpServerBluePrint.scala
@@ -25,7 +25,12 @@ import akka.http.scaladsl.settings.ServerSettings
 import akka.http.impl.engine.parsing.ParserOutput._
 import akka.http.impl.engine.parsing._
 import akka.http.impl.engine.rendering.ResponseRenderingContext.CloseRequested
-import akka.http.impl.engine.rendering.{ DateHeaderRendering, HttpResponseRendererFactory, ResponseRenderingContext, ResponseRenderingOutput }
+import akka.http.impl.engine.rendering.{
+  DateHeaderRendering,
+  HttpResponseRendererFactory,
+  ResponseRenderingContext,
+  ResponseRenderingOutput
+}
 import akka.http.impl.util._
 import akka.http.scaladsl.util.FastFuture.EnhancedFuture
 import akka.http.scaladsl.{ Http, TimeoutAccess }
@@ -40,7 +45,6 @@ import scala.util.Failure
 /**
  * INTERNAL API
  *
- *
  * HTTP pipeline setup (without the underlying SSL/TLS (un)wrapping and the websocket switch):
  *
  *                 +----------+          +-------------+          +-------------+             +-----------+
@@ -60,32 +64,39 @@ import scala.util.Failure
  */
 @InternalApi
 private[http] object HttpServerBluePrint {
-  def apply(settings: ServerSettings, log: LoggingAdapter, isSecureConnection: Boolean, dateHeaderRendering: DateHeaderRendering): Http.ServerLayer =
-    userHandlerGuard(settings.pipeliningLimit) atop
-      requestTimeoutSupport(settings.timeouts.requestTimeout, log) atop
-      requestPreparation(settings) atop
-      controller(settings, log) atop
-      parsingRendering(settings, log, isSecureConnection, dateHeaderRendering) atop
-      websocketSupport(settings, log) atop
-      tlsSupport atop
-      logTLSBidiBySetting("server-plain-text", settings.logUnencryptedNetworkBytes)
+  def apply(settings: ServerSettings, log: LoggingAdapter, isSecureConnection: Boolean,
+      dateHeaderRendering: DateHeaderRendering): Http.ServerLayer =
+    userHandlerGuard(settings.pipeliningLimit).atop(
+      requestTimeoutSupport(settings.timeouts.requestTimeout, log)).atop(
+      requestPreparation(settings)).atop(
+      controller(settings, log)).atop(
+      parsingRendering(settings, log, isSecureConnection, dateHeaderRendering)).atop(
+      websocketSupport(settings, log)).atop(
+      tlsSupport).atop(
+      logTLSBidiBySetting("server-plain-text", settings.logUnencryptedNetworkBytes))
 
   val tlsSupport: BidiFlow[ByteString, SslTlsOutbound, SslTlsInbound, SessionBytes, NotUsed] =
     BidiFlow.fromFlows(Flow[ByteString].map(SendBytes), Flow[SslTlsInbound].collect { case x: SessionBytes => x })
 
-  def websocketSupport(settings: ServerSettings, log: LoggingAdapter): BidiFlow[ResponseRenderingOutput, ByteString, SessionBytes, SessionBytes, NotUsed] =
+  def websocketSupport(settings: ServerSettings, log: LoggingAdapter)
+      : BidiFlow[ResponseRenderingOutput, ByteString, SessionBytes, SessionBytes, NotUsed] =
     BidiFlow.fromGraph(new ProtocolSwitchStage(settings, log))
 
-  def parsingRendering(settings: ServerSettings, log: LoggingAdapter, isSecureConnection: Boolean, dateHeaderRendering: DateHeaderRendering): BidiFlow[ResponseRenderingContext, ResponseRenderingOutput, SessionBytes, RequestOutput, NotUsed] =
+  def parsingRendering(settings: ServerSettings, log: LoggingAdapter, isSecureConnection: Boolean,
+      dateHeaderRendering: DateHeaderRendering)
+      : BidiFlow[ResponseRenderingContext, ResponseRenderingOutput, SessionBytes, RequestOutput, NotUsed] =
     BidiFlow.fromFlows(rendering(settings, log, dateHeaderRendering), parsing(settings, log, isSecureConnection))
 
-  def controller(settings: ServerSettings, log: LoggingAdapter): BidiFlow[HttpResponse, ResponseRenderingContext, RequestOutput, RequestOutput, NotUsed] =
+  def controller(settings: ServerSettings, log: LoggingAdapter)
+      : BidiFlow[HttpResponse, ResponseRenderingContext, RequestOutput, RequestOutput, NotUsed] =
     BidiFlow.fromGraph(new ControllerStage(settings, log)).reversed
 
-  def requestPreparation(settings: ServerSettings): BidiFlow[HttpResponse, HttpResponse, RequestOutput, HttpRequest, NotUsed] =
+  def requestPreparation(
+      settings: ServerSettings): BidiFlow[HttpResponse, HttpResponse, RequestOutput, HttpRequest, NotUsed] =
     BidiFlow.fromFlows(Flow[HttpResponse], new PrepareRequests(settings))
 
-  def requestTimeoutSupport(timeout: Duration, log: LoggingAdapter): BidiFlow[HttpResponse, HttpResponse, HttpRequest, HttpRequest, NotUsed] =
+  def requestTimeoutSupport(
+      timeout: Duration, log: LoggingAdapter): BidiFlow[HttpResponse, HttpResponse, HttpRequest, HttpRequest, NotUsed] =
     if (timeout == Duration.Zero) BidiFlow.identity[HttpResponse, HttpRequest]
     else BidiFlow.fromGraph(new RequestTimeoutSupport(timeout, log)).reversed
 
@@ -100,143 +111,149 @@ private[http] object HttpServerBluePrint {
     val out = Outlet[HttpRequest]("PrepareRequests.out")
     override val shape: FlowShape[RequestOutput, HttpRequest] = FlowShape.of(in, out)
 
-    override def createLogic(inheritedAttributes: Attributes) = new GraphStageLogic(shape) with InHandler with OutHandler {
-      val remoteAddressOpt = inheritedAttributes.get[HttpAttributes.RemoteAddress].map(_.address)
+    override def createLogic(inheritedAttributes: Attributes) =
+      new GraphStageLogic(shape) with InHandler with OutHandler {
+        val remoteAddressOpt = inheritedAttributes.get[HttpAttributes.RemoteAddress].map(_.address)
 
-      var downstreamPullWaiting = false
-      var completionDeferred = false
-      var entitySource: SubSourceOutlet[RequestOutput] = _
+        var downstreamPullWaiting = false
+        var completionDeferred = false
+        var entitySource: SubSourceOutlet[RequestOutput] = _
 
-      // optimization: to avoid allocations the "idle" case in and out handlers are put directly on the GraphStageLogic itself
-      override def onPull(): Unit = {
-        pull(in)
-      }
+        // optimization: to avoid allocations the "idle" case in and out handlers are put directly on the GraphStageLogic itself
+        override def onPull(): Unit = {
+          pull(in)
+        }
 
-      // optimization: this callback is used to handle entity substream cancellation to avoid allocating a dedicated handler
-      override def onDownstreamFinish(): Unit = {
-        if (entitySource ne null) {
-          // application layer has cancelled or only partially consumed response entity:
-          // connection will be closed
-          entitySource.complete()
+        // optimization: this callback is used to handle entity substream cancellation to avoid allocating a dedicated handler
+        override def onDownstreamFinish(): Unit = {
+          if (entitySource ne null) {
+            // application layer has cancelled or only partially consumed response entity:
+            // connection will be closed
+            entitySource.complete()
+          }
+          completeStage()
         }
-        completeStage()
-      }
 
-      override def onUpstreamFinish(): Unit = super.onUpstreamFinish()
-      override def onUpstreamFailure(ex: Throwable): Unit = {
-        if (entitySource ne null) {
-          // application layer has cancelled or only partially consumed response entity:
-          // connection will be closed
-          entitySource.fail(ex)
+        override def onUpstreamFinish(): Unit = super.onUpstreamFinish()
+        override def onUpstreamFailure(ex: Throwable): Unit = {
+          if (entitySource ne null) {
+            // application layer has cancelled or only partially consumed response entity:
+            // connection will be closed
+            entitySource.fail(ex)
+          }
+          super.onUpstreamFailure(ex)
         }
-        super.onUpstreamFailure(ex)
-      }
 
-      override def onPush(): Unit = grab(in) match {
-        case RequestStart(method, uri, protocol, attrs, hdrs, entityCreator, _, _) =>
-          val effectiveMethod = if (method == HttpMethods.HEAD && settings.transparentHeadRequests) HttpMethods.GET else method
+        override def onPush(): Unit = grab(in) match {
+          case RequestStart(method, uri, protocol, attrs, hdrs, entityCreator, _, _) =>
+            val effectiveMethod = if (method == HttpMethods.HEAD && settings.transparentHeadRequests) HttpMethods.GET
+            else method
 
-          @nowarn("msg=use remote-address-attribute instead")
-          val effectiveHeaders =
-            if (settings.remoteAddressHeader && remoteAddressOpt.isDefined)
-              headers.`Remote-Address`(RemoteAddress(remoteAddressOpt.get)) +: hdrs
-            else hdrs
+            @nowarn("msg=use remote-address-attribute instead")
+            val effectiveHeaders =
+              if (settings.remoteAddressHeader && remoteAddressOpt.isDefined)
+                headers.`Remote-Address`(RemoteAddress(remoteAddressOpt.get)) +: hdrs
+              else hdrs
 
-          val entity = createEntity(entityCreator) withSizeLimit settings.parserSettings.maxContentLength
-          val httpRequest = HttpRequest(effectiveMethod, uri, effectiveHeaders, entity, protocol)
-            .withAttributes(attrs)
+            val entity = createEntity(entityCreator).withSizeLimit(settings.parserSettings.maxContentLength)
+            val httpRequest = HttpRequest(effectiveMethod, uri, effectiveHeaders, entity, protocol)
+              .withAttributes(attrs)
 
-          val effectiveHttpRequest = if (settings.remoteAddressAttribute) {
-            remoteAddressOpt.fold(httpRequest) { remoteAddress =>
-              httpRequest.addAttribute(AttributeKeys.remoteAddress, RemoteAddress(remoteAddress))
-            }
-          } else httpRequest
+            val effectiveHttpRequest = if (settings.remoteAddressAttribute) {
+              remoteAddressOpt.fold(httpRequest) { remoteAddress =>
+                httpRequest.addAttribute(AttributeKeys.remoteAddress, RemoteAddress(remoteAddress))
+              }
+            } else httpRequest
 
-          push(out, effectiveHttpRequest)
-        case other =>
-          throw new IllegalStateException(s"unexpected element of type ${other.getClass}")
-      }
+            push(out, effectiveHttpRequest)
+          case other =>
+            throw new IllegalStateException(s"unexpected element of type ${other.getClass}")
+        }
 
-      setIdleHandlers()
+        setIdleHandlers()
 
-      def setIdleHandlers(): Unit = {
-        if (completionDeferred) {
-          completeStage()
-        } else {
-          setHandler(in, this)
-          setHandler(out, this)
-          if (downstreamPullWaiting) {
-            downstreamPullWaiting = false
-            pull(in)
+        def setIdleHandlers(): Unit = {
+          if (completionDeferred) {
+            completeStage()
+          } else {
+            setHandler(in, this)
+            setHandler(out, this)
+            if (downstreamPullWaiting) {
+              downstreamPullWaiting = false
+              pull(in)
+            }
           }
         }
-      }
-
-      def createEntity(creator: EntityCreator[RequestOutput, RequestEntity]): RequestEntity =
-        creator match {
-          case StrictEntityCreator(entity)    => entity
-          case StreamedEntityCreator(creator) => streamRequestEntity(creator)
-        }
 
-      def streamRequestEntity(creator: (Source[ParserOutput.RequestOutput, NotUsed]) => RequestEntity): RequestEntity = {
-        // stream incoming chunks into the request entity until we reach the end of it
-        // and then toggle back to "idle"
-
-        entitySource = new SubSourceOutlet[RequestOutput]("EntitySource")
-        // optimization: re-use the idle outHandler
-        entitySource.setHandler(this)
-
-        // optimization: handlers are combined to reduce allocations
-        val chunkedRequestHandler = new InHandler with OutHandler {
-          def onPush(): Unit = {
-            grab(in) match {
-              case MessageEnd =>
-                entitySource.complete()
-                entitySource = null
-                setIdleHandlers()
+        def createEntity(creator: EntityCreator[RequestOutput, RequestEntity]): RequestEntity =
+          creator match {
+            case StrictEntityCreator(entity)    => entity
+            case StreamedEntityCreator(creator) => streamRequestEntity(creator)
+          }
 
-              case x => entitySource.push(x)
+        def streamRequestEntity(
+            creator: (Source[ParserOutput.RequestOutput, NotUsed]) => RequestEntity): RequestEntity = {
+          // stream incoming chunks into the request entity until we reach the end of it
+          // and then toggle back to "idle"
+
+          entitySource = new SubSourceOutlet[RequestOutput]("EntitySource")
+          // optimization: re-use the idle outHandler
+          entitySource.setHandler(this)
+
+          // optimization: handlers are combined to reduce allocations
+          val chunkedRequestHandler = new InHandler with OutHandler {
+            def onPush(): Unit = {
+              grab(in) match {
+                case MessageEnd =>
+                  entitySource.complete()
+                  entitySource = null
+                  setIdleHandlers()
+
+                case x => entitySource.push(x)
+              }
+            }
+            override def onUpstreamFinish(): Unit = {
+              entitySource.complete()
+              completeStage()
+            }
+            override def onUpstreamFailure(ex: Throwable): Unit = {
+              entitySource.fail(ex)
+              failStage(ex)
+            }
+            override def onPull(): Unit = {
+              // remember this until we are done with the chunked entity
+              // so can pull downstream then
+              downstreamPullWaiting = true
+            }
+            override def onDownstreamFinish(): Unit = {
+              // downstream signalled not wanting any more requests
+              // we should keep processing the entity stream and then
+              // when it completes complete the stage
+              completionDeferred = true
             }
           }
-          override def onUpstreamFinish(): Unit = {
-            entitySource.complete()
-            completeStage()
-          }
-          override def onUpstreamFailure(ex: Throwable): Unit = {
-            entitySource.fail(ex)
-            failStage(ex)
-          }
-          override def onPull(): Unit = {
-            // remember this until we are done with the chunked entity
-            // so can pull downstream then
-            downstreamPullWaiting = true
-          }
-          override def onDownstreamFinish(): Unit = {
-            // downstream signalled not wanting any more requests
-            // we should keep processing the entity stream and then
-            // when it completes complete the stage
-            completionDeferred = true
-          }
+
+          setHandler(in, chunkedRequestHandler)
+          setHandler(out, chunkedRequestHandler)
+          creator(Source.fromGraph(entitySource.source))
         }
 
-        setHandler(in, chunkedRequestHandler)
-        setHandler(out, chunkedRequestHandler)
-        creator(Source.fromGraph(entitySource.source))
       }
-
-    }
   }
 
-  def parsing(settings: ServerSettings, log: LoggingAdapter, isSecureConnection: Boolean): Flow[SessionBytes, RequestOutput, NotUsed] = {
+  def parsing(settings: ServerSettings, log: LoggingAdapter, isSecureConnection: Boolean)
+      : Flow[SessionBytes, RequestOutput, NotUsed] = {
     import settings._
 
     // the initial header parser we initially use for every connection,
     // will not be mutated, all "shared copy" parsers copy on first-write into the header cache
-    val rootParser = new HttpRequestParser(parserSettings, websocketSettings, rawRequestUriHeader, HttpHeaderParser(parserSettings, log))
+    val rootParser = new HttpRequestParser(parserSettings, websocketSettings, rawRequestUriHeader,
+      HttpHeaderParser(parserSettings, log))
 
     def establishAbsoluteUri(requestOutput: RequestOutput): RequestOutput = requestOutput match {
       case connect: RequestStart if connect.method == HttpMethods.CONNECT =>
-        MessageStartError(StatusCodes.BadRequest, ErrorInfo(s"CONNECT requests are not supported", s"Rejecting CONNECT request to '${connect.uri}'"))
+        MessageStartError(StatusCodes.BadRequest,
+          ErrorInfo(s"CONNECT requests are not supported", s"Rejecting CONNECT request to '${connect.uri}'"))
       case start: RequestStart =>
         try {
           val effectiveUri = HttpRequest.effectiveUri(start.uri, start.headers, isSecureConnection, defaultHostHeader)
@@ -251,17 +268,19 @@ private[http] object HttpServerBluePrint {
     Flow[SessionBytes].via(rootParser).map(establishAbsoluteUri)
   }
 
-  def rendering(settings: ServerSettings, log: LoggingAdapter, dateHeaderRendering: DateHeaderRendering): Flow[ResponseRenderingContext, ResponseRenderingOutput, NotUsed] = {
+  def rendering(settings: ServerSettings, log: LoggingAdapter, dateHeaderRendering: DateHeaderRendering)
+      : Flow[ResponseRenderingContext, ResponseRenderingOutput, NotUsed] = {
     import settings._
 
-    val responseRendererFactory = new HttpResponseRendererFactory(serverHeader, responseHeaderSizeHint, log, dateHeaderRendering)
+    val responseRendererFactory =
+      new HttpResponseRendererFactory(serverHeader, responseHeaderSizeHint, log, dateHeaderRendering)
 
     Flow[ResponseRenderingContext]
       .via(responseRendererFactory.renderer.named("renderer"))
   }
 
   class RequestTimeoutSupport(initialTimeout: Duration, log: LoggingAdapter)
-    extends GraphStage[BidiShape[HttpRequest, HttpRequest, HttpResponse, HttpResponse]] {
+      extends GraphStage[BidiShape[HttpRequest, HttpRequest, HttpResponse, HttpResponse]] {
     private val requestIn = Inlet[HttpRequest]("RequestTimeoutSupport.requestIn")
     private val requestOut = Outlet[HttpRequest]("RequestTimeoutSupport.requestOut")
     private val responseIn = Inlet[HttpResponse]("RequestTimeoutSupport.responseIn")
@@ -281,44 +300,48 @@ private[http] object HttpServerBluePrint {
             emit(responseOut, response, () => completeStage())
           }
       }
-      setHandler(requestIn, new InHandler {
-        def onPush(): Unit = {
-          val request = grab(requestIn)
-          val (entity, requestEnd) = HttpEntity.captureTermination(request.entity)
-          val access = new TimeoutAccessImpl(request, initialTimeout, requestEnd, callback,
-            interpreter.materializer, log)
-          openTimeouts = openTimeouts.enqueue(access)
-          push(requestOut, request.addHeader(`Timeout-Access`(access)).withEntity(entity))
-        }
-        override def onUpstreamFinish() = complete(requestOut)
-        override def onUpstreamFailure(ex: Throwable) = fail(requestOut, ex)
-      })
+      setHandler(requestIn,
+        new InHandler {
+          def onPush(): Unit = {
+            val request = grab(requestIn)
+            val (entity, requestEnd) = HttpEntity.captureTermination(request.entity)
+            val access = new TimeoutAccessImpl(request, initialTimeout, requestEnd, callback,
+              interpreter.materializer, log)
+            openTimeouts = openTimeouts.enqueue(access)
+            push(requestOut, request.addHeader(`Timeout-Access`(access)).withEntity(entity))
+          }
+          override def onUpstreamFinish() = complete(requestOut)
+          override def onUpstreamFailure(ex: Throwable) = fail(requestOut, ex)
+        })
       // TODO: provide and use default impl for simply connecting an input and an output port as we do here
-      setHandler(requestOut, new OutHandler {
-        def onPull(): Unit = pull(requestIn)
-        override def onDownstreamFinish() = cancel(requestIn)
-      })
-      setHandler(responseIn, new InHandler {
-        def onPush(): Unit = {
-          openTimeouts.head.clear()
-          openTimeouts = openTimeouts.tail
-          push(responseOut, grab(responseIn))
-        }
-        override def onUpstreamFinish() = complete(responseOut)
-        override def onUpstreamFailure(ex: Throwable) = fail(responseOut, ex)
-      })
-      setHandler(responseOut, new OutHandler {
-        def onPull(): Unit = pull(responseIn)
-        override def onDownstreamFinish() = cancel(responseIn)
-      })
+      setHandler(requestOut,
+        new OutHandler {
+          def onPull(): Unit = pull(requestIn)
+          override def onDownstreamFinish() = cancel(requestIn)
+        })
+      setHandler(responseIn,
+        new InHandler {
+          def onPush(): Unit = {
+            openTimeouts.head.clear()
+            openTimeouts = openTimeouts.tail
+            push(responseOut, grab(responseIn))
+          }
+          override def onUpstreamFinish() = complete(responseOut)
+          override def onUpstreamFailure(ex: Throwable) = fail(responseOut, ex)
+        })
+      setHandler(responseOut,
+        new OutHandler {
+          def onPull(): Unit = pull(responseIn)
+          override def onDownstreamFinish() = cancel(responseIn)
+        })
     }
   }
 
   private class TimeoutSetup(
-    val timeoutBase:   Deadline,
-    val scheduledTask: Cancellable,
-    val timeout:       Duration,
-    val handler:       HttpRequest => HttpResponse)
+      val timeoutBase: Deadline,
+      val scheduledTask: Cancellable,
+      val timeout: Duration,
+      val handler: HttpRequest => HttpResponse)
 
   private object DummyCancellable extends Cancellable {
     override def isCancelled: Boolean = true
@@ -326,28 +349,29 @@ private[http] object HttpServerBluePrint {
   }
 
   private class TimeoutAccessImpl(request: HttpRequest, initialTimeout: Duration, requestEnd: Future[Unit],
-                                  trigger:      AsyncCallback[(TimeoutAccess, HttpResponse)],
-                                  materializer: Materializer, log: LoggingAdapter)
-    extends AtomicReference[Future[TimeoutSetup]] with TimeoutAccess with (HttpRequest => HttpResponse) { self =>
+      trigger: AsyncCallback[(TimeoutAccess, HttpResponse)],
+      materializer: Materializer, log: LoggingAdapter)
+      extends AtomicReference[Future[TimeoutSetup]] with TimeoutAccess with (HttpRequest => HttpResponse) { self =>
     import materializer.executionContext
 
     private var currentTimeout = initialTimeout
 
     initialTimeout match {
       case timeout: FiniteDuration => set {
-        requestEnd.fast.map(_ => new TimeoutSetup(Deadline.now, schedule(timeout, this), timeout, this))
-      }
+          requestEnd.fast.map(_ => new TimeoutSetup(Deadline.now, schedule(timeout, this), timeout, this))
+        }
       case _ => set {
-        requestEnd.fast.map(_ => new TimeoutSetup(Deadline.now, DummyCancellable, Duration.Inf, this))
-      }
+          requestEnd.fast.map(_ => new TimeoutSetup(Deadline.now, DummyCancellable, Duration.Inf, this))
+        }
     }
 
     override def apply(request: HttpRequest) = {
       log.info("Request timeout encountered for request [{}]", request.debugString)
-      //#default-request-timeout-httpresponse
-      HttpResponse(StatusCodes.ServiceUnavailable, entity = "The server was not able " +
-        "to produce a timely response to your request.\r\nPlease try again in a short while!")
-      //#default-request-timeout-httpresponse
+      // #default-request-timeout-httpresponse
+      HttpResponse(StatusCodes.ServiceUnavailable,
+        entity = "The server was not able " +
+          "to produce a timely response to your request.\r\nPlease try again in a short while!")
+      // #default-request-timeout-httpresponse
     }
 
     def clear(): Unit = // best effort timeout cancellation
@@ -375,7 +399,8 @@ private[http] object HttpServerBluePrint {
       materializer.scheduleOnce(delay, new Runnable { def run() = trigger.invoke((self, handler(request))) })
 
     import akka.http.impl.util.JavaMapping.Implicits._
-    /** JAVA API **/
+
+    /** JAVA API */
     def update(timeout: Duration, handler: Function[model.HttpRequest, model.HttpResponse]): Unit =
       update(timeout, handler(_: HttpRequest).asScala)
     def updateHandler(handler: Function[model.HttpRequest, model.HttpResponse]): Unit =
@@ -385,7 +410,7 @@ private[http] object HttpServerBluePrint {
   }
 
   class ControllerStage(settings: ServerSettings, log: LoggingAdapter)
-    extends GraphStage[BidiShape[RequestOutput, RequestOutput, HttpResponse, ResponseRenderingContext]] {
+      extends GraphStage[BidiShape[RequestOutput, RequestOutput, HttpResponse, ResponseRenderingContext]] {
     private val requestParsingIn = Inlet[RequestOutput]("ControllerStage.requestParsingIn")
     private val requestPrepOut = Outlet[RequestOutput]("ControllerStage.requestPrepOut")
     private val httpResponseIn = Inlet[HttpResponse]("ControllerStage.httpResponseIn")
@@ -395,218 +420,233 @@ private[http] object HttpServerBluePrint {
 
     val shape = new BidiShape(requestParsingIn, requestPrepOut, httpResponseIn, responseCtxOut)
 
-    override private[akka] def createLogicAndMaterializedValue(inheritedAttributes: Attributes, outerMaterializer: Materializer) = new GraphStageLogic(shape) {
-      val parsingErrorHandler: ParsingErrorHandler = settings.parsingErrorHandlerInstance(ActorMaterializerHelper.downcast(outerMaterializer).system)
-      val pullHttpResponseIn = () => tryPull(httpResponseIn)
-      var openRequests = immutable.Queue[RequestStart]()
-      var oneHundredContinueResponsePending = false
-      var pullSuppressed = false
-      var messageEndPending = false
-
-      setHandler(requestParsingIn, new InHandler {
-        def onPush(): Unit =
-          grab(requestParsingIn) match {
-            case r: RequestStart =>
-              openRequests = openRequests.enqueue(r)
-              messageEndPending = r.createEntity.isInstanceOf[StreamedEntityCreator[_, _]]
-              val rs = if (r.expect100Continue) {
-                r.createEntity match {
-                  case StrictEntityCreator(HttpEntity.Strict(_, _)) =>
-                    // This covers two cases:
-                    // - Either: The strict entity got all its data sent already, so no need to wait for more data
-                    // - Or: The strict entity contains no data (Content-Length header value was 0 or it did not exist), the client will not send any data
-                    r
-                  case _ =>
-                    oneHundredContinueResponsePending = true
-                    r.copy(createEntity = with100ContinueTrigger(r.createEntity))
-                }
-              } else r
-              push(requestPrepOut, rs)
-            case MessageEnd =>
-              messageEndPending = false
-              push(requestPrepOut, MessageEnd)
-            case MessageStartError(status, info) => finishWithIllegalRequestError(status, info)
-            case x: EntityStreamError if messageEndPending && openRequests.isEmpty =>
-              // client terminated the connection after receiving an early response to 100-continue
-              completeStage()
-            case x =>
-              push(requestPrepOut, x)
-          }
-        override def onUpstreamFinish() =
-          if (openRequests.isEmpty) completeStage()
-          else complete(requestPrepOut)
-      })
-
-      setHandler(requestPrepOut, new OutHandler {
-        def onPull(): Unit =
-          if (oneHundredContinueResponsePending) pullSuppressed = true
-          else if (!hasBeenPulled(requestParsingIn)) pull(requestParsingIn)
-        override def onDownstreamFinish(): Unit =
-          if (openRequests.isEmpty) completeStage()
-          else failStage(new IllegalStateException("User handler flow was cancelled with ongoing request") with NoStackTrace)
-      })
-
-      setHandler(httpResponseIn, new InHandler {
-        def onPush(): Unit = {
-          val requestStart = openRequests.head
-          openRequests = openRequests.tail
-
-          val response0 = grab(httpResponseIn)
-          val response =
-            if (response0.entity.isStrict) response0 // response stream cannot fail
-            else response0.mapEntity { e =>
-              val (newEntity, fut) = HttpEntity.captureTermination(e)
-              fut.onComplete {
-                case Failure(ex) =>
-                  log.error(ex, s"Response stream for [${requestStart.debugString}] failed with '${ex.getMessage}'. Aborting connection.")
-                case _ => // ignore
-              }(ExecutionContexts.sameThreadExecutionContext)
-              newEntity
-            }
-
-          val isEarlyResponse = messageEndPending && openRequests.isEmpty
-          if (isEarlyResponse && response.status.isSuccess)
-            log.warning(
-              s"Sending a 2xx 'early' response before end of request for ${requestStart.uri} received... " +
-                "Note that the connection will be closed after this response. Also, many clients will not read early responses! " +
-                "Consider only issuing this response after the request data has been completely read!")
-          val forceClose =
-            (requestStart.expect100Continue && oneHundredContinueResponsePending) ||
-              (isClosed(requestParsingIn) && openRequests.isEmpty) ||
-              isEarlyResponse
-
-          val close =
-            if (forceClose) CloseRequested.ForceClose
-            else if (requestStart.closeRequested) CloseRequested.RequestAskedForClosing
-            else CloseRequested.Unspecified
-
-          emit(responseCtxOut, ResponseRenderingContext(response, requestStart.method, requestStart.protocol, close),
-            pullHttpResponseIn)
-          if (!isClosed(requestParsingIn) && close.shouldClose && requestStart.expect100Continue) maybePullRequestParsingIn()
-        }
-        override def onUpstreamFinish() =
-          if (openRequests.isEmpty && isClosed(requestParsingIn)) completeStage()
-          else complete(responseCtxOut)
-        override def onUpstreamFailure(ex: Throwable): Unit =
-          ex match {
-            case EntityStreamException(errorInfo) =>
-              // the application has forwarded a request entity stream error to the response stream
-              finishWithIllegalRequestError(StatusCodes.BadRequest, errorInfo)
-
-            case EntityStreamSizeException(limit, contentLength) =>
-              val summary = contentLength match {
-                case Some(cl) => s"Request Content-Length of $cl bytes exceeds the configured limit of $limit bytes"
-                case None     => s"Aggregated data length of request entity exceeds the configured limit of $limit bytes"
+    override private[akka] def createLogicAndMaterializedValue(inheritedAttributes: Attributes,
+        outerMaterializer: Materializer) =
+      new GraphStageLogic(shape) {
+        val parsingErrorHandler: ParsingErrorHandler =
+          settings.parsingErrorHandlerInstance(ActorMaterializerHelper.downcast(outerMaterializer).system)
+        val pullHttpResponseIn = () => tryPull(httpResponseIn)
+        var openRequests = immutable.Queue[RequestStart]()
+        var oneHundredContinueResponsePending = false
+        var pullSuppressed = false
+        var messageEndPending = false
+
+        setHandler(requestParsingIn,
+          new InHandler {
+            def onPush(): Unit =
+              grab(requestParsingIn) match {
+                case r: RequestStart =>
+                  openRequests = openRequests.enqueue(r)
+                  messageEndPending = r.createEntity.isInstanceOf[StreamedEntityCreator[_, _]]
+                  val rs = if (r.expect100Continue) {
+                    r.createEntity match {
+                      case StrictEntityCreator(HttpEntity.Strict(_, _)) =>
+                        // This covers two cases:
+                        // - Either: The strict entity got all its data sent already, so no need to wait for more data
+                        // - Or: The strict entity contains no data (Content-Length header value was 0 or it did not exist), the client will not send any data
+                        r
+                      case _ =>
+                        oneHundredContinueResponsePending = true
+                        r.copy(createEntity = with100ContinueTrigger(r.createEntity))
+                    }
+                  } else r
+                  push(requestPrepOut, rs)
+                case MessageEnd =>
+                  messageEndPending = false
+                  push(requestPrepOut, MessageEnd)
+                case MessageStartError(status, info)                                   => finishWithIllegalRequestError(status, info)
+                case x: EntityStreamError if messageEndPending && openRequests.isEmpty =>
+                  // client terminated the connection after receiving an early response to 100-continue
+                  completeStage()
+                case x =>
+                  push(requestPrepOut, x)
               }
-              val info = ErrorInfo(summary, "Consider increasing the value of akka.http.server.parsing.max-content-length")
-              finishWithIllegalRequestError(StatusCodes.PayloadTooLarge, info)
+            override def onUpstreamFinish() =
+              if (openRequests.isEmpty) completeStage()
+              else complete(requestPrepOut)
+          })
 
-            case IllegalUriException(errorInfo) =>
-              finishWithIllegalRequestError(StatusCodes.BadRequest, errorInfo)
+        setHandler(requestPrepOut,
+          new OutHandler {
+            def onPull(): Unit =
+              if (oneHundredContinueResponsePending) pullSuppressed = true
+              else if (!hasBeenPulled(requestParsingIn)) pull(requestParsingIn)
+            override def onDownstreamFinish(): Unit =
+              if (openRequests.isEmpty) completeStage()
+              else failStage(
+                new IllegalStateException("User handler flow was cancelled with ongoing request") with NoStackTrace)
+          })
 
-            case ex: ServerTerminationDeadlineReached => failStage(ex)
+        setHandler(httpResponseIn,
+          new InHandler {
+            def onPush(): Unit = {
+              val requestStart = openRequests.head
+              openRequests = openRequests.tail
+
+              val response0 = grab(httpResponseIn)
+              val response =
+                if (response0.entity.isStrict) response0 // response stream cannot fail
+                else response0.mapEntity { e =>
+                  val (newEntity, fut) = HttpEntity.captureTermination(e)
+                  fut.onComplete {
+                    case Failure(ex) =>
+                      log.error(ex,
+                        s"Response stream for [${requestStart.debugString}] failed with '${ex.getMessage}'. Aborting connection.")
+                    case _ => // ignore
+                  }(ExecutionContexts.sameThreadExecutionContext)
+                  newEntity
+                }
 
-            case NonFatal(e) =>
-              log.error(e, "Internal server error, sending 500 response")
-              emitErrorResponse(HttpResponse(StatusCodes.InternalServerError))
-          }
-      })
+              val isEarlyResponse = messageEndPending && openRequests.isEmpty
+              if (isEarlyResponse && response.status.isSuccess)
+                log.warning(
+                  s"Sending a 2xx 'early' response before end of request for ${requestStart.uri} received... " +
+                  "Note that the connection will be closed after this response. Also, many clients will not read early responses! " +
+                  "Consider only issuing this response after the request data has been completely read!")
+              val forceClose =
+                (requestStart.expect100Continue && oneHundredContinueResponsePending) ||
+                (isClosed(requestParsingIn) && openRequests.isEmpty) ||
+                isEarlyResponse
+
+              val close =
+                if (forceClose) CloseRequested.ForceClose
+                else if (requestStart.closeRequested) CloseRequested.RequestAskedForClosing
+                else CloseRequested.Unspecified
+
+              emit(responseCtxOut,
+                ResponseRenderingContext(response, requestStart.method, requestStart.protocol, close),
+                pullHttpResponseIn)
+              if (!isClosed(requestParsingIn) && close.shouldClose && requestStart.expect100Continue)
+                maybePullRequestParsingIn()
+            }
+            override def onUpstreamFinish() =
+              if (openRequests.isEmpty && isClosed(requestParsingIn)) completeStage()
+              else complete(responseCtxOut)
+            override def onUpstreamFailure(ex: Throwable): Unit =
+              ex match {
+                case EntityStreamException(errorInfo) =>
+                  // the application has forwarded a request entity stream error to the response stream
+                  finishWithIllegalRequestError(StatusCodes.BadRequest, errorInfo)
+
+                case EntityStreamSizeException(limit, contentLength) =>
+                  val summary = contentLength match {
+                    case Some(cl) => s"Request Content-Length of $cl bytes exceeds the configured limit of $limit bytes"
+                    case None =>
+                      s"Aggregated data length of request entity exceeds the configured limit of $limit bytes"
+                  }
+                  val info =
+                    ErrorInfo(summary, "Consider increasing the value of akka.http.server.parsing.max-content-length")
+                  finishWithIllegalRequestError(StatusCodes.PayloadTooLarge, info)
+
+                case IllegalUriException(errorInfo) =>
+                  finishWithIllegalRequestError(StatusCodes.BadRequest, errorInfo)
+
+                case ex: ServerTerminationDeadlineReached => failStage(ex)
+
+                case NonFatal(e) =>
+                  log.error(e, "Internal server error, sending 500 response")
+                  emitErrorResponse(HttpResponse(StatusCodes.InternalServerError))
+              }
+          })
 
-      setHandler(responseCtxOut, new OutHandler {
-        override def onPull() = {
-          pull(httpResponseIn)
-          // after the initial pull here we only ever pull after having emitted in `onPush` of `httpResponseIn`
-          setHandler(responseCtxOut, GraphStageLogic.EagerTerminateOutput)
-        }
-      })
+        setHandler(responseCtxOut,
+          new OutHandler {
+            override def onPull() = {
+              pull(httpResponseIn)
+              // after the initial pull here we only ever pull after having emitted in `onPush` of `httpResponseIn`
+              setHandler(responseCtxOut, GraphStageLogic.EagerTerminateOutput)
+            }
+          })
 
-      def finishWithIllegalRequestError(status: StatusCode, info: ErrorInfo): Unit = {
-        val errorResponse = JavaMapping.toScala(parsingErrorHandler.handle(status, info, log, settings))
-        emitErrorResponse(errorResponse)
-      }
+        def finishWithIllegalRequestError(status: StatusCode, info: ErrorInfo): Unit = {
+          val errorResponse = JavaMapping.toScala(parsingErrorHandler.handle(status, info, log, settings))
+          emitErrorResponse(errorResponse)
+        }
 
-      def emitErrorResponse(response: HttpResponse): Unit =
-        emit(responseCtxOut, ResponseRenderingContext(response, closeRequested = CloseRequested.ForceClose), () => completeStage())
+        def emitErrorResponse(response: HttpResponse): Unit =
+          emit(responseCtxOut, ResponseRenderingContext(response, closeRequested = CloseRequested.ForceClose),
+            () => completeStage())
 
-      def maybePullRequestParsingIn(): Unit =
-        if (pullSuppressed) {
-          pullSuppressed = false
-          pull(requestParsingIn)
-        }
+        def maybePullRequestParsingIn(): Unit =
+          if (pullSuppressed) {
+            pullSuppressed = false
+            pull(requestParsingIn)
+          }
 
-      /**
-       * The `Expect: 100-continue` header has a special status in HTTP.
-       * It allows the client to send an `Expect: 100-continue` header with the request and then pause request sending
-       * (i.e. hold back sending the request entity). The server reads the request headers, determines whether it wants to
-       * accept the request and responds with
-       *
-       * - `417 Expectation Failed`, if it doesn't support the `100-continue` expectation
-       * (or if the `Expect` header contains other, unsupported expectations).
-       * - a `100 Continue` response,
-       * if it is ready to accept the request entity and the client should go ahead with sending it
-       * - a final response (like a 4xx to signal some client-side error
-       * (e.g. if the request entity length is beyond the configured limit) or a 3xx redirect)
-       *
-       * Only if the client receives a `100 Continue` response from the server is it allowed to continue sending the request
-       * entity. In this case it will receive another response after having completed request sending.
-       * So this special feature breaks the normal "one request - one response" logic of HTTP!
-       * It therefore requires special handling in all HTTP stacks (client- and server-side).
-       *
-       * For us this means:
-       *
-       * - on the server-side:
-       * After having read an `Expect: 100-continue` header with the request we package up an `HttpRequest` instance and send
-       * it through to the application. Only when (and if) the application then requests data from the entity stream do we
-       * send out a `100 Continue` response and continue reading the request entity.
-       * The application can therefore determine itself whether it wants the client to send the request entity
-       * by deciding whether to look at the request entity data stream or not.
-       * If the application sends a response *without* having looked at the request entity the client receives this
-       * response *instead of* the `100 Continue` response and the server closes the connection afterwards.
-       *
-       * - on the client-side:
-       * If the user adds an `Expect: 100-continue` header to the request we need to hold back sending the entity until
-       * we've received a `100 Continue` response.
-       */
-      val emit100ContinueResponse =
-        getAsyncCallback[Unit] { _ =>
-          oneHundredContinueResponsePending = false
-          emit(responseCtxOut, ResponseRenderingContext(HttpResponse(StatusCodes.Continue)))
-          maybePullRequestParsingIn()
-        }
+        /**
+         * The `Expect: 100-continue` header has a special status in HTTP.
+         * It allows the client to send an `Expect: 100-continue` header with the request and then pause request sending
+         * (i.e. hold back sending the request entity). The server reads the request headers, determines whether it wants to
+         * accept the request and responds with
+         *
+         * - `417 Expectation Failed`, if it doesn't support the `100-continue` expectation
+         * (or if the `Expect` header contains other, unsupported expectations).
+         * - a `100 Continue` response,
+         * if it is ready to accept the request entity and the client should go ahead with sending it
+         * - a final response (like a 4xx to signal some client-side error
+         * (e.g. if the request entity length is beyond the configured limit) or a 3xx redirect)
+         *
+         * Only if the client receives a `100 Continue` response from the server is it allowed to continue sending the request
+         * entity. In this case it will receive another response after having completed request sending.
+         * So this special feature breaks the normal "one request - one response" logic of HTTP!
+         * It therefore requires special handling in all HTTP stacks (client- and server-side).
+         *
+         * For us this means:
+         *
+         * - on the server-side:
+         * After having read an `Expect: 100-continue` header with the request we package up an `HttpRequest` instance and send
+         * it through to the application. Only when (and if) the application then requests data from the entity stream do we
+         * send out a `100 Continue` response and continue reading the request entity.
+         * The application can therefore determine itself whether it wants the client to send the request entity
+         * by deciding whether to look at the request entity data stream or not.
+         * If the application sends a response *without* having looked at the request entity the client receives this
+         * response *instead of* the `100 Continue` response and the server closes the connection afterwards.
+         *
+         * - on the client-side:
+         * If the user adds an `Expect: 100-continue` header to the request we need to hold back sending the entity until
+         * we've received a `100 Continue` response.
+         */
+        val emit100ContinueResponse =
+          getAsyncCallback[Unit] { _ =>
+            oneHundredContinueResponsePending = false
+            emit(responseCtxOut, ResponseRenderingContext(HttpResponse(StatusCodes.Continue)))
+            maybePullRequestParsingIn()
+          }
 
-      case object OneHundredContinueStage extends GraphStage[FlowShape[ParserOutput, ParserOutput]] {
-        val in: Inlet[ParserOutput] = Inlet("OneHundredContinueStage.in")
-        val out: Outlet[ParserOutput] = Outlet("OneHundredContinueStage.out")
-        override val shape: FlowShape[ParserOutput, ParserOutput] = FlowShape(in, out)
+        case object OneHundredContinueStage extends GraphStage[FlowShape[ParserOutput, ParserOutput]] {
+          val in: Inlet[ParserOutput] = Inlet("OneHundredContinueStage.in")
+          val out: Outlet[ParserOutput] = Outlet("OneHundredContinueStage.out")
+          override val shape: FlowShape[ParserOutput, ParserOutput] = FlowShape(in, out)
 
-        override def initialAttributes = Attributes.name("expect100continueTrigger")
+          override def initialAttributes = Attributes.name("expect100continueTrigger")
 
-        override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
-          new GraphStageLogic(shape) with InHandler with OutHandler {
-            private var oneHundredContinueSent = false
+          override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
+            new GraphStageLogic(shape) with InHandler with OutHandler {
+              private var oneHundredContinueSent = false
 
-            override def onPush(): Unit = push(out, grab(in))
-            override def onPull(): Unit = {
-              if (!oneHundredContinueSent) {
-                oneHundredContinueSent = true
-                emit100ContinueResponse.invoke(())
+              override def onPush(): Unit = push(out, grab(in))
+              override def onPull(): Unit = {
+                if (!oneHundredContinueSent) {
+                  oneHundredContinueSent = true
+                  emit100ContinueResponse.invoke(())
+                }
+                pull(in)
               }
-              pull(in)
-            }
 
-            setHandlers(in, out, this)
-          }
-      }
+              setHandlers(in, out, this)
+            }
+        }
 
-      def with100ContinueTrigger[T <: ParserOutput](createEntity: EntityCreator[T, RequestEntity]) =
-        StreamedEntityCreator {
-          createEntity.compose[Source[T, NotUsed]] {
-            _.via(OneHundredContinueStage.asInstanceOf[GraphStage[FlowShape[T, T]]])
+        def with100ContinueTrigger[T <: ParserOutput](createEntity: EntityCreator[T, RequestEntity]) =
+          StreamedEntityCreator {
+            createEntity.compose[Source[T, NotUsed]] {
+              _.via(OneHundredContinueStage.asInstanceOf[GraphStage[FlowShape[T, T]]])
+            }
           }
-        }
-    } -> NotUsed
+      } -> NotUsed
 
-    def createLogic(effectiveAttributes: Attributes): GraphStageLogic = throw new IllegalStateException("unexpected invocation")
+    def createLogic(effectiveAttributes: Attributes): GraphStageLogic =
+      throw new IllegalStateException("unexpected invocation")
   }
 
   /**
@@ -618,7 +658,7 @@ private[http] object HttpServerBluePrint {
     One2OneBidiFlow[HttpRequest, HttpResponse](pipeliningLimit).reversed
 
   private class ProtocolSwitchStage(settings: ServerSettings, log: LoggingAdapter)
-    extends GraphStage[BidiShape[ResponseRenderingOutput, ByteString, SessionBytes, SessionBytes]] {
+      extends GraphStage[BidiShape[ResponseRenderingOutput, ByteString, SessionBytes, SessionBytes]] {
 
     private val fromNet = Inlet[SessionBytes]("ProtocolSwitchStage.fromNet")
     private val toNet = Outlet[ByteString]("ProtocolSwitchStage.toNet")
@@ -638,33 +678,37 @@ private[http] object HttpServerBluePrint {
        * are replaced.
        */
 
-      setHandler(fromHttp, new InHandler {
-        override def onPush(): Unit =
-          grab(fromHttp) match {
-            case HttpData(b) => push(toNet, b)
-            case SwitchToOtherProtocol(bytes, handlerFlow) =>
-              push(toNet, bytes)
-              complete(toHttp)
-              cancel(fromHttp)
-              switchToOtherProtocol(handlerFlow)
-          }
-        override def onUpstreamFinish(): Unit = complete(toNet)
-        override def onUpstreamFailure(ex: Throwable): Unit = fail(toNet, ex)
-      })
-      setHandler(toNet, new OutHandler {
-        override def onPull(): Unit = pull(fromHttp)
-        override def onDownstreamFinish(): Unit = completeStage()
-      })
-
-      setHandler(fromNet, new InHandler {
-        override def onPush(): Unit = push(toHttp, grab(fromNet))
-        override def onUpstreamFinish(): Unit = complete(toHttp)
-        override def onUpstreamFailure(ex: Throwable): Unit = fail(toHttp, ex)
-      })
-      setHandler(toHttp, new OutHandler {
-        override def onPull(): Unit = pull(fromNet)
-        override def onDownstreamFinish(): Unit = cancel(fromNet)
-      })
+      setHandler(fromHttp,
+        new InHandler {
+          override def onPush(): Unit =
+            grab(fromHttp) match {
+              case HttpData(b) => push(toNet, b)
+              case SwitchToOtherProtocol(bytes, handlerFlow) =>
+                push(toNet, bytes)
+                complete(toHttp)
+                cancel(fromHttp)
+                switchToOtherProtocol(handlerFlow)
+            }
+          override def onUpstreamFinish(): Unit = complete(toNet)
+          override def onUpstreamFailure(ex: Throwable): Unit = fail(toNet, ex)
+        })
+      setHandler(toNet,
+        new OutHandler {
+          override def onPull(): Unit = pull(fromHttp)
+          override def onDownstreamFinish(): Unit = completeStage()
+        })
+
+      setHandler(fromNet,
+        new InHandler {
+          override def onPush(): Unit = push(toHttp, grab(fromNet))
+          override def onUpstreamFinish(): Unit = complete(toHttp)
+          override def onUpstreamFailure(ex: Throwable): Unit = fail(toHttp, ex)
+        })
+      setHandler(toHttp,
+        new OutHandler {
+          override def onPull(): Unit = pull(fromNet)
+          override def onDownstreamFinish(): Unit = cancel(fromNet)
+        })
 
       private var activeTimers = 0
       private def timeout = ActorMaterializerHelper.downcast(materializer).settings.subscriptionTimeoutSettings.timeout
@@ -696,13 +740,14 @@ private[http] object HttpServerBluePrint {
         })
 
         if (isClosed(fromNet)) {
-          setHandler(toNet, new OutHandler {
-            override def onPull(): Unit = sinkIn.pull()
-            override def onDownstreamFinish(): Unit = {
-              completeStage()
-              sinkIn.cancel()
-            }
-          })
+          setHandler(toNet,
+            new OutHandler {
+              override def onPull(): Unit = sinkIn.pull()
+              override def onDownstreamFinish(): Unit = {
+                completeStage()
+                sinkIn.cancel()
+              }
+            })
           newFlow.runWith(Source.empty, sinkIn.sink)(subFusingMaterializer)
         } else {
           val sourceOut = new SubSourceOutlet[ByteString]("FrameSource")
@@ -713,24 +758,26 @@ private[http] object HttpServerBluePrint {
           })
           addTimeout(timeoutKey)
 
-          setHandler(toNet, new OutHandler {
-            override def onPull(): Unit = sinkIn.pull()
-            override def onDownstreamFinish(): Unit = {
-              completeStage()
-              sinkIn.cancel()
-              sourceOut.complete()
-            }
-          })
+          setHandler(toNet,
+            new OutHandler {
+              override def onPull(): Unit = sinkIn.pull()
+              override def onDownstreamFinish(): Unit = {
+                completeStage()
+                sinkIn.cancel()
+                sourceOut.complete()
+              }
+            })
 
-          setHandler(fromNet, new InHandler {
-            override def onPush(): Unit = {
-              if (sourceOut.isAvailable) {
-                sourceOut.push(grab(fromNet).bytes)
+          setHandler(fromNet,
+            new InHandler {
+              override def onPush(): Unit = {
+                if (sourceOut.isAvailable) {
+                  sourceOut.push(grab(fromNet).bytes)
+                }
               }
-            }
-            override def onUpstreamFinish(): Unit = sourceOut.complete()
-            override def onUpstreamFailure(ex: Throwable): Unit = sourceOut.fail(ex)
-          })
+              override def onUpstreamFinish(): Unit = sourceOut.complete()
+              override def onUpstreamFailure(ex: Throwable): Unit = sourceOut.fail(ex)
+            })
           sourceOut.setHandler(new OutHandler {
             override def onPull(): Unit = {
               // This check only needed on the first pull due to potential element
@@ -751,13 +798,14 @@ private[http] object HttpServerBluePrint {
           })
 
           // disable the old handlers, at this point we might still get something due to cancellation delay which we need to ignore
-          setHandlers(fromHttp, toHttp, new InHandler with OutHandler {
-            override def onPush(): Unit = ()
-            override def onPull(): Unit = ()
-            override def onUpstreamFinish(): Unit = ()
-            override def onUpstreamFailure(ex: Throwable): Unit = ()
-            override def onDownstreamFinish(): Unit = ()
-          })
+          setHandlers(fromHttp, toHttp,
+            new InHandler with OutHandler {
+              override def onPush(): Unit = ()
+              override def onPull(): Unit = ()
+              override def onUpstreamFinish(): Unit = ()
+              override def onUpstreamFailure(ex: Throwable): Unit = ()
+              override def onDownstreamFinish(): Unit = ()
+            })
 
           newFlow.runWith(sourceOut.source, sinkIn.sink)(subFusingMaterializer)
         }
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/server/ServerTerminator.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/server/ServerTerminator.scala
index 33f74ccec..2d45bd31c 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/server/ServerTerminator.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/server/ServerTerminator.scala
@@ -30,6 +30,7 @@ import scala.util.{ Failure, Success }
 // "Hasta la vista, baby."
 @InternalApi
 private[http] trait ServerTerminator {
+
   /**
    * Initiate the termination sequence of this server.
    */
@@ -52,7 +53,8 @@ private[http] final class MasterServerTerminator(log: LoggingAdapter) extends Se
   // if we get a termination signal from the outer (user-land) it has to send this signal to all existing connection
   // terminators such that they can initiate the termination (draining, failing) of their respective connections.
 
-  private val terminators = new AtomicReference[MasterServerTerminator.State](MasterServerTerminator.AliveConnectionTerminators(Set.empty))
+  private val terminators =
+    new AtomicReference[MasterServerTerminator.State](MasterServerTerminator.AliveConnectionTerminators(Set.empty))
   private val termination = Promise[HttpTerminated]()
 
   /**
@@ -67,7 +69,7 @@ private[http] final class MasterServerTerminator(log: LoggingAdapter) extends Se
     terminators.get() match {
       case v @ AliveConnectionTerminators(ts) =>
         terminators.compareAndSet(v, v.copy(ts = ts + terminator)) ||
-          registerConnection(terminator) // retry
+        registerConnection(terminator) // retry
 
       case Terminating(deadline) =>
         terminator.terminate(deadline.timeLeft)(ec)
@@ -126,7 +128,9 @@ private[http] final class MasterServerTerminator(log: LoggingAdapter) extends Se
         } else terminate(timeout)(ex) // retry
 
       case Terminating(existingDeadline) =>
-        log.warning(s"Issued terminate($timeout) while termination is in progress already (with deadline: time left: ${PrettyDuration.format(existingDeadline.timeLeft)}")
+        log.warning(
+          s"Issued terminate($timeout) while termination is in progress already (with deadline: time left: ${PrettyDuration.format(
+              existingDeadline.timeLeft)}")
         termination.future
     }
   }
@@ -140,10 +144,12 @@ private[http] final class MasterServerTerminator(log: LoggingAdapter) extends Se
  */
 @InternalApi
 private[http] final class ServerTerminationDeadlineReached()
-  extends RuntimeException("Server termination deadline reached, shutting down all connections and terminating server...")
+    extends RuntimeException(
+      "Server termination deadline reached, shutting down all connections and terminating server...")
 
 object GracefulTerminatorStage {
-  def apply(system: ActorSystem, serverSettings: ServerSettings): BidiFlow[HttpResponse, HttpResponse, HttpRequest, HttpRequest, ServerTerminator] = {
+  def apply(system: ActorSystem, serverSettings: ServerSettings)
+      : BidiFlow[HttpResponse, HttpResponse, HttpRequest, HttpRequest, ServerTerminator] = {
     val stage = new GracefulTerminatorStage(serverSettings)
     BidiFlow.fromGraph(stage)
   }
@@ -164,7 +170,8 @@ object GracefulTerminatorStage {
  */
 @InternalApi
 private[http] final class GracefulTerminatorStage(settings: ServerSettings)
-  extends GraphStageWithMaterializedValue[BidiShape[HttpResponse, HttpResponse, HttpRequest, HttpRequest], ServerTerminator] {
+    extends GraphStageWithMaterializedValue[BidiShape[HttpResponse, HttpResponse, HttpRequest, HttpRequest],
+      ServerTerminator] {
 
   val fromNet: Inlet[HttpRequest] = Inlet("netIn")
   val toUser: Outlet[HttpRequest] = Outlet("userOut")
@@ -175,7 +182,8 @@ private[http] final class GracefulTerminatorStage(settings: ServerSettings)
 
   final val TerminationDeadlineTimerKey = "TerminationDeadlineTimerKey"
 
-  final class ConnectionTerminator(triggerTermination: Promise[FiniteDuration => Future[HttpTerminated]]) extends ServerTerminator {
+  final class ConnectionTerminator(
+      triggerTermination: Promise[FiniteDuration => Future[HttpTerminated]]) extends ServerTerminator {
     override def terminate(deadline: FiniteDuration)(implicit ec: ExecutionContext): Future[HttpTerminated] = {
       triggerTermination.future.flatMap(callback => {
         callback(deadline)
@@ -222,101 +230,112 @@ private[http] final class GracefulTerminatorStage(settings: ServerSettings)
         }
       }
 
-      setHandler(fromUser, new InHandler {
-        override def onPush(): Unit = {
-          val response = grab(fromUser)
-          pendingUserHandlerResponse = false
-          push(toNet, response)
-        }
-
-        override def onUpstreamFinish(): Unit = {
-          // don't finish the whole bidi stage, just propagate the completion:
-          complete(toNet)
-        }
-      })
-      setHandler(toUser, new OutHandler {
-        override def onPull(): Unit = {
-          pull(fromNet)
-        }
-      })
-      setHandler(fromNet, new InHandler {
-        override def onPush(): Unit = {
-          val request = grab(fromNet)
-
-          pendingUserHandlerResponse = true
-          push(toUser, request)
-        }
-
-        override def onUpstreamFinish(): Unit = {
-          // don't finish the whole bidi stage, just propagate the completion:
-          complete(toUser)
-        }
-      })
-      setHandler(toNet, new OutHandler {
-        override def onPull(): Unit = {
-          pull(fromUser)
-        }
-      })
-
-      def installTerminationHandlers(deadline: Deadline): Unit = {
-        // when no inflight requests, fail stage right away, could probably be a complete
-        // when https://github.com/akka/akka-http/issues/3209 is fixed
-        if (!pendingUserHandlerResponse) failStage(new ServerTerminationDeadlineReached)
-
-        setHandler(fromUser, new InHandler {
+      setHandler(fromUser,
+        new InHandler {
           override def onPush(): Unit = {
-            val overdue = deadline.isOverdue()
-            val response =
-              if (overdue) {
-                log.warning("Terminating server ({}), discarding user reply since arrived after deadline expiration", formatTimeLeft(deadline))
-                settings.terminationDeadlineExceededResponse
-              } else grab(fromUser)
-
+            val response = grab(fromUser)
             pendingUserHandlerResponse = false
+            push(toNet, response)
+          }
 
-            // send response to pending in-flight request with Connection: close, and complete stage
-            emit(toNet, response.withHeaders(Connection("close") +: response.headers.filterNot(_.is(Connection.lowercaseName))), () => completeStage())
+          override def onUpstreamFinish(): Unit = {
+            // don't finish the whole bidi stage, just propagate the completion:
+            complete(toNet)
           }
         })
-
-        // once termination deadline hits, we stop pulling from network
-        setHandler(toUser, new OutHandler {
+      setHandler(toUser,
+        new OutHandler {
           override def onPull(): Unit = {
-            // if (deadline.hasTimeLeft()) // we pull always as we want to reply errors to everyone
             pull(fromNet)
           }
         })
-
-        setHandler(fromNet, new InHandler {
+      setHandler(fromNet,
+        new InHandler {
           override def onPush(): Unit = {
             val request = grab(fromNet)
-            log.warning(
-              "Terminating server ({}), attempting to send termination reply to incoming [{} {}]",
-              formatTimeLeft(deadline), request.method, request.uri.path)
-
-            // on purpose discard all incoming bytes for requests
-            // could discard with the deadline.timeLeft completion timeout, but not necessarily needed
-            request.entity.discardBytes()(interpreter.subFusingMaterializer).future.onComplete {
-              case Success(_) => // ignore
-              case Failure(ex) =>
-                // we do want to cause this failure to fail the termination eagerly
-                failureCallback.invoke(ex)
-            }(interpreter.materializer.executionContext)
-
-            // we can reply right away with a termination response since user handler will never emit a response anymore
-            push(toNet, settings.terminationDeadlineExceededResponse.withHeaders(Connection("close")))
-            completeStage()
+
+            pendingUserHandlerResponse = true
+            push(toUser, request)
           }
-        })
 
-        // we continue pulling from user, to make sure we'd get the "final user reply" that may be sent during termination
-        setHandler(toNet, new OutHandler {
+          override def onUpstreamFinish(): Unit = {
+            // don't finish the whole bidi stage, just propagate the completion:
+            complete(toUser)
+          }
+        })
+      setHandler(toNet,
+        new OutHandler {
           override def onPull(): Unit = {
-            if (pendingUserHandlerResponse) {
-              if (isAvailable(fromUser)) pull(fromUser)
-            }
+            pull(fromUser)
           }
         })
+
+      def installTerminationHandlers(deadline: Deadline): Unit = {
+        // when no inflight requests, fail stage right away, could probably be a complete
+        // when https://github.com/akka/akka-http/issues/3209 is fixed
+        if (!pendingUserHandlerResponse) failStage(new ServerTerminationDeadlineReached)
+
+        setHandler(fromUser,
+          new InHandler {
+            override def onPush(): Unit = {
+              val overdue = deadline.isOverdue()
+              val response =
+                if (overdue) {
+                  log.warning("Terminating server ({}), discarding user reply since arrived after deadline expiration",
+                    formatTimeLeft(deadline))
+                  settings.terminationDeadlineExceededResponse
+                } else grab(fromUser)
+
+              pendingUserHandlerResponse = false
+
+              // send response to pending in-flight request with Connection: close, and complete stage
+              emit(toNet,
+                response.withHeaders(Connection("close") +: response.headers.filterNot(_.is(Connection.lowercaseName))),
+                () => completeStage())
+            }
+          })
+
+        // once termination deadline hits, we stop pulling from network
+        setHandler(toUser,
+          new OutHandler {
+            override def onPull(): Unit = {
+              // if (deadline.hasTimeLeft()) // we pull always as we want to reply errors to everyone
+              pull(fromNet)
+            }
+          })
+
+        setHandler(fromNet,
+          new InHandler {
+            override def onPush(): Unit = {
+              val request = grab(fromNet)
+              log.warning(
+                "Terminating server ({}), attempting to send termination reply to incoming [{} {}]",
+                formatTimeLeft(deadline), request.method, request.uri.path)
+
+              // on purpose discard all incoming bytes for requests
+              // could discard with the deadline.timeLeft completion timeout, but not necessarily needed
+              request.entity.discardBytes()(interpreter.subFusingMaterializer).future.onComplete {
+                case Success(_)  => // ignore
+                case Failure(ex) =>
+                  // we do want to cause this failure to fail the termination eagerly
+                  failureCallback.invoke(ex)
+              }(interpreter.materializer.executionContext)
+
+              // we can reply right away with a termination response since the user handler will never emit a response anymore
+              push(toNet, settings.terminationDeadlineExceededResponse.withHeaders(Connection("close")))
+              completeStage()
+            }
+          })
+
+        // we continue pulling from user, to make sure we'd get the "final user reply" that may be sent during termination
+        setHandler(toNet,
+          new OutHandler {
+            override def onPull(): Unit = {
+              if (pendingUserHandlerResponse) {
+                if (isAvailable(fromUser)) pull(fromUser)
+              }
+            }
+          })
       }
 
       override def postStop(): Unit = {
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/server/UpgradeToOtherProtocolResponseHeader.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/server/UpgradeToOtherProtocolResponseHeader.scala
index 3df5c8c1c..5182477b1 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/server/UpgradeToOtherProtocolResponseHeader.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/server/UpgradeToOtherProtocolResponseHeader.scala
@@ -14,7 +14,7 @@ import akka.util.ByteString
  */
 @InternalApi
 private[http] final case class UpgradeToOtherProtocolResponseHeader(handler: Flow[ByteString, ByteString, Any])
-  extends InternalCustomHeader("UpgradeToOtherProtocolResponseHeader")
+    extends InternalCustomHeader("UpgradeToOtherProtocolResponseHeader")
 
 /**
  * Internal API
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameEvent.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameEvent.scala
index e17c82419..2ba3f2888 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameEvent.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameEvent.scala
@@ -44,27 +44,27 @@ private[http] final case class FrameData(data: ByteString, lastPart: Boolean) ex
 
 /** Model of the frame header */
 private[http] final case class FrameHeader(
-  opcode: Protocol.Opcode,
-  mask:   Option[Int],
-  length: Long,
-  fin:    Boolean,
-  rsv1:   Boolean         = false,
-  rsv2:   Boolean         = false,
-  rsv3:   Boolean         = false)
+    opcode: Protocol.Opcode,
+    mask: Option[Int],
+    length: Long,
+    fin: Boolean,
+    rsv1: Boolean = false,
+    rsv2: Boolean = false,
+    rsv3: Boolean = false)
 
 private[http] object FrameEvent {
   def empty(
-    opcode: Protocol.Opcode,
-    fin:    Boolean,
-    rsv1:   Boolean         = false,
-    rsv2:   Boolean         = false,
-    rsv3:   Boolean         = false): FrameStart =
+      opcode: Protocol.Opcode,
+      fin: Boolean,
+      rsv1: Boolean = false,
+      rsv2: Boolean = false,
+      rsv3: Boolean = false): FrameStart =
     fullFrame(opcode, None, ByteString.empty, fin, rsv1, rsv2, rsv3)
   def fullFrame(opcode: Protocol.Opcode, mask: Option[Int], data: ByteString,
-                fin:  Boolean,
-                rsv1: Boolean = false,
-                rsv2: Boolean = false,
-                rsv3: Boolean = false): FrameStart =
+      fin: Boolean,
+      rsv1: Boolean = false,
+      rsv2: Boolean = false,
+      rsv3: Boolean = false): FrameStart =
     FrameStart(FrameHeader(opcode, mask, data.length, fin, rsv1, rsv2, rsv3), data)
   val emptyLastContinuationFrame: FrameStart =
     empty(Protocol.Opcode.Continuation, fin = true)
@@ -72,8 +72,8 @@ private[http] object FrameEvent {
   def closeFrame(closeCode: Int, reason: String = "", mask: Option[Int] = None): FrameStart = {
     require(closeCode >= 1000, s"Invalid close code: $closeCode")
     val body = ByteString(
-      ((closeCode & 0xff00) >> 8).toByte,
-      (closeCode & 0xff).toByte) ++ ByteString(reason, "UTF8")
+      ((closeCode & 0xFF00) >> 8).toByte,
+      (closeCode & 0xFF).toByte) ++ ByteString(reason, "UTF8")
 
     fullFrame(Opcode.Close, mask, FrameEventParser.mask(body, mask), fin = true)
   }
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameEventParser.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameEventParser.scala
index 8c437a482..1ac706d87 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameEventParser.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameEventParser.scala
@@ -117,10 +117,10 @@ private[http] object FrameEventParser extends ByteStringParser[FrameEvent] {
     }
 
   def mask(bytes: ByteString, mask: Int): (ByteString, Int) = {
-    val m0 = ((mask >> 24) & 0xff).toByte
-    val m1 = ((mask >> 16) & 0xff).toByte
-    val m2 = ((mask >> 8) & 0xff).toByte
-    val m3 = ((mask >> 0) & 0xff).toByte
+    val m0 = ((mask >> 24) & 0xFF).toByte
+    val m1 = ((mask >> 16) & 0xFF).toByte
+    val m2 = ((mask >> 8) & 0xFF).toByte
+    val m3 = ((mask >> 0) & 0xFF).toByte
 
     @tailrec def rec(bytes: Array[Byte], offset: Int, last: Int): Unit =
       if (offset < last) {
@@ -157,7 +157,7 @@ private[http] object FrameEventParser extends ByteStringParser[FrameEvent] {
     def invalid(reason: String) = Some((Protocol.CloseCodes.ProtocolError, s"Peer sent illegal close frame ($reason)."))
 
     if (data.length >= 2) {
-      val code = ((data(0) & 0xff) << 8) | (data(1) & 0xff)
+      val code = ((data(0) & 0xFF) << 8) | (data(1) & 0xFF)
       val message = Utf8Decoder.decode(data.drop(2))
       if (!Protocol.CloseCodes.isValid(code)) invalid(s"invalid close code '$code'")
       else if (message.isFailure) invalid("close reason message is invalid UTF8")
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameEventRenderer.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameEventRenderer.scala
index aed6006aa..d70242529 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameEventRenderer.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameEventRenderer.scala
@@ -63,9 +63,10 @@ private[http] final class FrameEventRenderer extends GraphStage[FlowShape[FrameE
       }
 
     setHandler(in, Initial)
-    setHandler(out, new OutHandler {
-      override def onPull(): Unit = pull(in)
-    })
+    setHandler(out,
+      new OutHandler {
+        override def onPull(): Unit = pull(in)
+      })
   }
 
   private def renderStart(start: FrameStart): ByteString = renderHeader(start.header) ++ start.data
@@ -88,9 +89,9 @@ private[http] final class FrameEventRenderer extends GraphStage[FlowShape[FrameE
     def bool(b: Boolean, mask: Int): Int = if (b) mask else 0
     val flags =
       bool(header.fin, FIN_MASK) |
-        bool(header.rsv1, RSV1_MASK) |
-        bool(header.rsv2, RSV2_MASK) |
-        bool(header.rsv3, RSV3_MASK)
+      bool(header.rsv1, RSV1_MASK) |
+      bool(header.rsv2, RSV2_MASK) |
+      bool(header.rsv3, RSV3_MASK)
 
     data(0) = (flags | header.opcode.code).toByte
     data(1) = (bool(header.mask.isDefined, MASK_MASK) | lengthBits).toByte
@@ -103,7 +104,7 @@ private[http] final class FrameEventRenderer extends GraphStage[FlowShape[FrameE
       case 8 =>
         @tailrec def addLongBytes(l: Long, writtenBytes: Int): Unit =
           if (writtenBytes < 8) {
-            data(2 + writtenBytes) = (l & 0xff).toByte
+            data(2 + writtenBytes) = (l & 0xFF).toByte
             addLongBytes(java.lang.Long.rotateLeft(l, 8), writtenBytes + 1)
           }
 
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameHandler.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameHandler.scala
index d0df11c29..3e398d1a9 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameHandler.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameHandler.scala
@@ -87,20 +87,22 @@ private[http] object FrameHandler {
 
           override def handleFrameData(data: FrameData): Unit = publish(data)
 
-          def publish(part: FrameEvent): Unit = try {
-            publishMessagePart(createMessagePart(part.data, last = finSeen && part.lastPart))
-          } catch {
-            case NonFatal(e) => closeWithCode(Protocol.CloseCodes.InconsistentData)
-          }
+          def publish(part: FrameEvent): Unit =
+            try {
+              publishMessagePart(createMessagePart(part.data, last = finSeen && part.lastPart))
+            } catch {
+              case NonFatal(e) => closeWithCode(Protocol.CloseCodes.InconsistentData)
+            }
         }
 
         private trait ControlFrameStartHandler extends FrameHandler {
           def handleRegularFrameStart(start: FrameStart): Unit
 
           override def handleFrameStart(start: FrameStart): Unit = start.header match {
-            case h: FrameHeader if h.mask.isDefined && !server                                      => pushProtocolError()
-            case h: FrameHeader if h.rsv1 || h.rsv2 || h.rsv3                                       => pushProtocolError()
-            case FrameHeader(op, _, length, fin, _, _, _) if op.isControl && (length > 125 || !fin) => pushProtocolError()
+            case h: FrameHeader if h.mask.isDefined && !server => pushProtocolError()
+            case h: FrameHeader if h.rsv1 || h.rsv2 || h.rsv3  => pushProtocolError()
+            case FrameHeader(op, _, length, fin, _, _, _) if op.isControl && (length > 125 || !fin) =>
+              pushProtocolError()
             case h: FrameHeader if h.opcode.isControl =>
               if (start.isFullMessage) handleControlFrame(h.opcode, start.data, this)
               else collectControlFrame(start, this)
@@ -111,7 +113,8 @@ private[http] object FrameHandler {
             throw new IllegalStateException("Expected FrameStart")
         }
 
-        private class ControlFrameDataHandler(opcode: Opcode, _data: ByteString, nextHandler: InHandler) extends FrameHandler {
+        private class ControlFrameDataHandler(
+            opcode: Opcode, _data: ByteString, nextHandler: InHandler) extends FrameHandler {
           var data = _data
 
           override def handleFrameData(data: FrameData): Unit = {
@@ -140,8 +143,8 @@ private[http] object FrameHandler {
                 push(out, PeerClosed.parse(data))
               case Opcode.Other(o) => closeWithCode(Protocol.CloseCodes.ProtocolError, "Unsupported opcode")
               case other => failStage(
-                new IllegalStateException(s"unexpected message of type [${other.getClass.getName}] when expecting ControlFrame")
-              )
+                  new IllegalStateException(
+                    s"unexpected message of type [${other.getClass.getName}] when expecting ControlFrame"))
             }
           }
 
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameLogger.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameLogger.scala
index cbeee7463..8f1a80ef6 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameLogger.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameLogger.scala
@@ -20,7 +20,8 @@ import akka.util.ByteString
 private[ws] object FrameLogger {
   final val maxBytes = 16
 
-  def logFramesIfEnabled(shouldLog: Boolean): BidiFlow[FrameEventOrError, FrameEventOrError, FrameEvent, FrameEvent, NotUsed] =
+  def logFramesIfEnabled(
+      shouldLog: Boolean): BidiFlow[FrameEventOrError, FrameEventOrError, FrameEvent, FrameEvent, NotUsed] =
     if (shouldLog) bidi
     else BidiFlow.identity
 
@@ -33,7 +34,8 @@ private[ws] object FrameLogger {
   def logEvent(frameEvent: FrameEventOrError): String = {
     import Console._
 
-    def displayLogEntry(frameType: String, length: Long, data: String, lastPart: Boolean, flags: Option[String]*): String = {
+    def displayLogEntry(
+        frameType: String, length: Long, data: String, lastPart: Boolean, flags: Option[String]*): String = {
       val f = if (flags.nonEmpty) s" $RED${flags.flatten.mkString(" ")}" else ""
       val l = if (length > 0) f" $YELLOW$length%d bytes" else ""
       f"$GREEN$frameType%s$f$l$RESET $data${if (!lastPart) " ..." else ""}"
@@ -50,8 +52,9 @@ private[ws] object FrameLogger {
     }
 
     frameEvent match {
-      case f @ FrameStart(header, data) => displayLogEntry(header.opcode.short, header.length, hex(data), f.lastPart, flag(header.fin, "FIN"), flag(header.rsv1, "RSV1"), flag(header.rsv2, "RSV2"), flag(header.rsv3, "RSV3"))
-      case FrameData(data, lastPart)    => displayLogEntry("DATA", 0, hex(data), lastPart)
+      case f @ FrameStart(header, data) => displayLogEntry(header.opcode.short, header.length, hex(data), f.lastPart,
+          flag(header.fin, "FIN"), flag(header.rsv1, "RSV1"), flag(header.rsv2, "RSV2"), flag(header.rsv3, "RSV3"))
+      case FrameData(data, lastPart) => displayLogEntry("DATA", 0, hex(data), lastPart)
       case FrameError(ex) =>
         f"${RED}Error: ${ex.getMessage}$RESET"
     }
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameOutHandler.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameOutHandler.scala
index 66c8157c9..4ff9be9e9 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameOutHandler.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameOutHandler.scala
@@ -23,7 +23,7 @@ import scala.concurrent.duration.{ Deadline, FiniteDuration }
  */
 @InternalApi
 private[http] class FrameOutHandler(serverSide: Boolean, _closeTimeout: FiniteDuration, log: LoggingAdapter)
-  extends GraphStage[FlowShape[FrameOutHandler.Input, FrameStart]] {
+    extends GraphStage[FlowShape[FrameOutHandler.Input, FrameStart]] {
   val in = Inlet[FrameOutHandler.Input]("FrameOutHandler.in")
   val out = Outlet[FrameStart]("FrameOutHandler.out")
 
@@ -39,11 +39,13 @@ private[http] class FrameOutHandler(serverSide: Boolean, _closeTimeout: FiniteDu
     private object Idle extends InHandler with ProcotolExceptionHandling {
       override def onPush() =
         grab(in) match {
-          case start: FrameStart   => push(out, start)
-          case DirectAnswer(frame) => push(out, frame)
+          case start: FrameStart                                                     => push(out, start)
+          case DirectAnswer(frame)                                                   => push(out, frame)
           case PeerClosed(code, reason) if !code.exists(Protocol.CloseCodes.isError) =>
             // let user complete it, FIXME: maybe make configurable? immediately, or timeout
-            setHandler(in, new WaitingForUserHandlerClosed(FrameEvent.closeFrame(code.getOrElse(Protocol.CloseCodes.Regular), reason)))
+            setHandler(in,
+              new WaitingForUserHandlerClosed(FrameEvent.closeFrame(code.getOrElse(Protocol.CloseCodes.Regular),
+                reason)))
             pull(in)
           case PeerClosed(code, reason) =>
             val closeFrame = FrameEvent.closeFrame(code.getOrElse(Protocol.CloseCodes.Regular), reason)
@@ -104,12 +106,14 @@ private[http] class FrameOutHandler(serverSide: Boolean, _closeTimeout: FiniteDu
     /**
      * we have sent out close frame and wait for peer to sent its close frame
      */
-    private class WaitingForPeerCloseFrame(deadline: Deadline = closeDeadline()) extends InHandler with ProcotolExceptionHandling {
+    private class WaitingForPeerCloseFrame(deadline: Deadline = closeDeadline()) extends InHandler
+        with ProcotolExceptionHandling {
       override def onPush() =
         grab(in) match {
           case Tick =>
             if (deadline.isOverdue()) {
-              if (log.isDebugEnabled) log.debug(s"Peer did not acknowledge CLOSE frame after ${_closeTimeout}, closing underlying connection now.")
+              if (log.isDebugEnabled) log.debug(
+                s"Peer did not acknowledge CLOSE frame after ${_closeTimeout}, closing underlying connection now.")
               completeStage()
             } else pull(in)
           case PeerClosed(code, reason) =>
@@ -125,12 +129,14 @@ private[http] class FrameOutHandler(serverSide: Boolean, _closeTimeout: FiniteDu
     /**
      * Both side have sent their close frames, server should close the connection first
      */
-    private class WaitingForTransportClose(deadline: Deadline = closeDeadline()) extends InHandler with ProcotolExceptionHandling {
+    private class WaitingForTransportClose(deadline: Deadline = closeDeadline()) extends InHandler
+        with ProcotolExceptionHandling {
       override def onPush() = {
         grab(in) match {
           case Tick =>
             if (deadline.isOverdue()) {
-              if (log.isDebugEnabled) log.debug(s"Peer did not close TCP connection after sending CLOSE frame after ${_closeTimeout}, closing underlying connection now.")
+              if (log.isDebugEnabled) log.debug(
+                s"Peer did not close TCP connection after sending CLOSE frame after ${_closeTimeout}, closing underlying connection now.")
               completeStage()
             } else pull(in)
           case _ => pull(in) // ignore
@@ -139,7 +145,8 @@ private[http] class FrameOutHandler(serverSide: Boolean, _closeTimeout: FiniteDu
     }
 
     /** If upstream has already failed we just wait to be able to deliver our close frame and complete */
-    private class SendOutCloseFrameAndComplete(closeFrame: FrameStart) extends InHandler with OutHandler with ProcotolExceptionHandling {
+    private class SendOutCloseFrameAndComplete(closeFrame: FrameStart) extends InHandler with OutHandler
+        with ProcotolExceptionHandling {
       override def onPush() =
         fail(out, new IllegalStateException("Didn't expect push after completion"))
 
@@ -174,9 +181,10 @@ private[http] class FrameOutHandler(serverSide: Boolean, _closeTimeout: FiniteDu
     // init handlers
 
     setHandler(in, Idle)
-    setHandler(out, new OutHandler {
-      override def onPull(): Unit = pull(in)
-    })
+    setHandler(out,
+      new OutHandler {
+        override def onPull(): Unit = pull(in)
+      })
 
   }
 }
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/ws/Handshake.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/ws/Handshake.scala
index 26d57bbaa..d841a0fcc 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/ws/Handshake.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/ws/Handshake.scala
@@ -30,6 +30,7 @@ private[http] object Handshake {
   val CurrentWebSocketVersion = 13
 
   object Server {
+
     /**
      *  Validates a client WebSocket handshake. Returns either `OptionVal.Some(UpgradeToWebSocketLowLevel)` or
      *  `OptionVal.None`
@@ -73,7 +74,8 @@ private[http] object Handshake {
      *        cookies or request authentication to a server.  Unknown header
      *        fields are ignored, as per [RFC2616].
      */
-    def websocketUpgrade(headers: List[HttpHeader], hostHeaderPresent: Boolean, settings: WebSocketSettings, log: LoggingAdapter): OptionVal[UpgradeToWebSocketLowLevel] = {
+    def websocketUpgrade(headers: List[HttpHeader], hostHeaderPresent: Boolean, settings: WebSocketSettings,
+        log: LoggingAdapter): OptionVal[UpgradeToWebSocketLowLevel] = {
 
       // notes on Headers that re REQUIRE to be present here:
       // - Host header is validated in general HTTP logic
@@ -114,17 +116,21 @@ private[http] object Handshake {
           val header = new UpgradeToWebSocketLowLevel {
             def requestedProtocols: Seq[String] = clientSupportedSubprotocols
 
-            def handle(handler: Either[Graph[FlowShape[FrameEvent, FrameEvent], Any], Graph[FlowShape[Message, Message], Any]], subprotocol: Option[String]): HttpResponse = {
+            def handle(
+                handler: Either[Graph[FlowShape[FrameEvent, FrameEvent], Any], Graph[FlowShape[Message, Message], Any]],
+                subprotocol: Option[String]): HttpResponse = {
               require(
                 subprotocol.forall(chosen => clientSupportedSubprotocols.contains(chosen)),
                 s"Tried to choose invalid subprotocol '$subprotocol' which wasn't offered by the client: [${requestedProtocols.mkString(", ")}]")
               buildResponse(key.get, handler, subprotocol, settings, log)
             }
 
-            def handleFrames(handlerFlow: Graph[FlowShape[FrameEvent, FrameEvent], Any], subprotocol: Option[String]): HttpResponse =
+            def handleFrames(
+                handlerFlow: Graph[FlowShape[FrameEvent, FrameEvent], Any], subprotocol: Option[String]): HttpResponse =
               handle(Left(handlerFlow), subprotocol)
 
-            override def handleMessages(handlerFlow: Graph[FlowShape[Message, Message], Any], subprotocol: Option[String] = None): HttpResponse =
+            override def handleMessages(handlerFlow: Graph[FlowShape[Message, Message], Any],
+                subprotocol: Option[String] = None): HttpResponse =
               handle(Right(handlerFlow), subprotocol)
           }
           OptionVal.Some(header)
@@ -150,8 +156,10 @@ private[http] object Handshake {
           E914-47DA-95CA-C5AB0DC85B11", taking the SHA-1 hash of this
           concatenated value to obtain a 20-byte value and base64-
           encoding (see Section 4 of [RFC4648]) this 20-byte hash.
-    */
-    def buildResponse(key: `Sec-WebSocket-Key`, handler: Either[Graph[FlowShape[FrameEvent, FrameEvent], Any], Graph[FlowShape[Message, Message], Any]], subprotocol: Option[String], settings: WebSocketSettings, log: LoggingAdapter): HttpResponse = {
+     */
+    def buildResponse(key: `Sec-WebSocket-Key`,
+        handler: Either[Graph[FlowShape[FrameEvent, FrameEvent], Any], Graph[FlowShape[Message, Message], Any]],
+        subprotocol: Option[String], settings: WebSocketSettings, log: LoggingAdapter): HttpResponse = {
       val frameHandler = handler match {
         case Left(frameHandler) => frameHandler
         case Right(messageHandler) =>
@@ -161,11 +169,11 @@ private[http] object Handshake {
       HttpResponse(
         StatusCodes.SwitchingProtocols,
         subprotocol.map(p => `Sec-WebSocket-Protocol`(Seq(p))).toList :::
-          List(
-            UpgradeHeader,
-            ConnectionUpgradeHeader,
-            `Sec-WebSocket-Accept`.forKey(key),
-            UpgradeToOtherProtocolResponseHeader(WebSocket.framing.join(frameHandler))))
+        List(
+          UpgradeHeader,
+          ConnectionUpgradeHeader,
+          `Sec-WebSocket-Accept`.forKey(key),
+          UpgradeToOtherProtocolResponseHeader(WebSocket.framing.join(frameHandler))))
     }
   }
 
@@ -175,14 +183,15 @@ private[http] object Handshake {
     /**
      * Builds a WebSocket handshake request.
      */
-    def buildRequest(uri: Uri, extraHeaders: immutable.Seq[HttpHeader], subprotocols: Seq[String], random: Random): (HttpRequest, `Sec-WebSocket-Key`) = {
+    def buildRequest(uri: Uri, extraHeaders: immutable.Seq[HttpHeader], subprotocols: Seq[String], random: Random)
+        : (HttpRequest, `Sec-WebSocket-Key`) = {
       val keyBytes = new Array[Byte](16)
       random.nextBytes(keyBytes)
       val key = `Sec-WebSocket-Key`(keyBytes)
       val protocol =
         if (subprotocols.nonEmpty) `Sec-WebSocket-Protocol`(subprotocols) :: Nil
         else Nil
-      //version, protocol, extensions, origin
+      // version, protocol, extensions, origin
 
       val headers = Seq(
         UpgradeHeader,
@@ -197,7 +206,8 @@ private[http] object Handshake {
      * Tries to validate the HTTP response. Returns either Right(settings) or an error message if
      * the response cannot be validated.
      */
-    def validateResponse(response: HttpResponse, subprotocols: Seq[String], key: `Sec-WebSocket-Key`): Either[String, NegotiatedWebSocketSettings] = {
+    def validateResponse(response: HttpResponse, subprotocols: Seq[String], key: `Sec-WebSocket-Key`)
+        : Either[String, NegotiatedWebSocketSettings] = {
       /*
        From http://tools.ietf.org/html/rfc6455#section-4.1
 
@@ -239,7 +249,7 @@ private[http] object Handshake {
            not present in the client's handshake (the server has indicated a
            subprotocol not requested by the client), the client MUST _Fail
            the WebSocket Connection_.
-     */
+       */
 
       trait Expectation extends (HttpResponse => Option[String]) { outer =>
         def &&(other: HttpResponse => Option[String]): Expectation =
@@ -259,23 +269,26 @@ private[http] object Handshake {
         }
 
       def compare(candidate: HttpHeader, caseInsensitive: Boolean): Option[HttpHeader] => Boolean = {
-        case Some(`candidate`) if !caseInsensitive => true
+        case Some(`candidate`) if !caseInsensitive                                                              => true
         case Some(header) if caseInsensitive && candidate.value.toRootLowerCase == header.value.toRootLowerCase => true
-        case _ => false
+        case _                                                                                                  => false
       }
 
-      def headerExists(candidate: HttpHeader, showExactOther: Boolean = true, caseInsensitive: Boolean = false): Expectation =
-        check(_.headers.find(_.name == candidate.name))(compare(candidate, caseInsensitive), {
-          case Some(other) if showExactOther => s"response that was missing required `$candidate` header. Found `$other` with the wrong value."
-          case Some(_)                       => s"response with invalid `${candidate.name}` header."
-          case None                          => s"response that was missing required `${candidate.name}` header."
-        })
+      def headerExists(
+          candidate: HttpHeader, showExactOther: Boolean = true, caseInsensitive: Boolean = false): Expectation =
+        check(_.headers.find(_.name == candidate.name))(compare(candidate, caseInsensitive),
+          {
+            case Some(other) if showExactOther =>
+              s"response that was missing required `$candidate` header. Found `$other` with the wrong value."
+            case Some(_) => s"response with invalid `${candidate.name}` header."
+            case None    => s"response that was missing required `${candidate.name}` header."
+          })
 
       val expectations: Expectation =
         check(_.status)(_ == StatusCodes.SwitchingProtocols, "unexpected status code: " + _) &&
-          headerExists(UpgradeHeader, caseInsensitive = true) &&
-          headerExists(ConnectionUpgradeHeader, caseInsensitive = true) &&
-          headerExists(`Sec-WebSocket-Accept`.forKey(key), showExactOther = false)
+        headerExists(UpgradeHeader, caseInsensitive = true) &&
+        headerExists(ConnectionUpgradeHeader, caseInsensitive = true) &&
+        headerExists(`Sec-WebSocket-Accept`.forKey(key), showExactOther = false)
 
       expectations(response) match {
         case None =>
@@ -283,7 +296,9 @@ private[http] object Handshake {
 
           if (subprotocols.isEmpty && subs.isEmpty) Right(NegotiatedWebSocketSettings(None)) // no specific one selected
           else if (subs.nonEmpty && subprotocols.contains(subs.get)) Right(NegotiatedWebSocketSettings(Some(subs.get)))
-          else Left(s"response that indicated that the given subprotocol was not supported. (client supported: ${subprotocols.mkString(", ")}, server supported: $subs)")
+          else Left(
+            s"response that indicated that the given subprotocol was not supported. (client supported: ${subprotocols.mkString(
+                ", ")}, server supported: $subs)")
         case Some(problem) => Left(problem)
       }
     }
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/ws/Masking.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/ws/Masking.scala
index a0d5e9cc4..957706091 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/ws/Masking.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/ws/Masking.scala
@@ -19,7 +19,8 @@ import akka.stream.stage._
  */
 @InternalApi
 private[http] object Masking {
-  def apply(serverSide: Boolean, maskRandom: () => Random): BidiFlow[ /* net in */ FrameEvent, /* app out */ FrameEventOrError, /* app in */ FrameEvent, /* net out */ FrameEvent, NotUsed] =
+  def apply(serverSide: Boolean, maskRandom: () => Random): BidiFlow[/* net in */ FrameEvent,
+    /* app out */ FrameEventOrError, /* app in */ FrameEvent, /* net out */ FrameEvent, NotUsed] =
     BidiFlow.fromFlowsMat(unmaskIf(serverSide), maskIf(!serverSide, maskRandom))(Keep.none)
 
   def maskIf(condition: Boolean, maskRandom: () => Random): Flow[FrameEvent, FrameEvent, NotUsed] =
@@ -63,51 +64,52 @@ private[http] object Masking {
     val out = Outlet[FrameEventOrError](s"${toString}-out")
     override val shape: FlowShape[FrameEvent, FrameEventOrError] = FlowShape(in, out)
... 44026 lines suppressed ...
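
The `buildResponse` doc comment quoted in the Handshake.scala diff above describes RFC 6455's accept-key derivation: concatenate the client's `Sec-WebSocket-Key` with the fixed GUID, take the SHA-1 hash, and base64-encode the 20-byte result. A small Python sketch of that computation (illustrative only; the project's actual implementation is the Scala `` `Sec-WebSocket-Accept`.forKey `` shown in the diff):

```python
import base64
import hashlib

# Fixed GUID from RFC 6455 (quoted in the doc comment above).
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def sec_websocket_accept(key: str) -> str:
    """Derive the Sec-WebSocket-Accept value for a client's Sec-WebSocket-Key."""
    digest = hashlib.sha1((key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Worked example from RFC 6455:
print(sec_websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```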


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@pekko.apache.org
For additional commands, e-mail: commits-help@pekko.apache.org


[incubator-pekko-http] 04/04: Fix paradox issues caused by scalafmt

Posted by md...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

mdedetrich pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-pekko-http.git

commit 664f044d77b3f96204a8a7ec4e96a4a272ca1752
Author: Matthew de Detrich <ma...@aiven.io>
AuthorDate: Mon Nov 14 11:49:20 2022 +0100

    Fix paradox issues caused by scalafmt
---
 project/ParadoxSupport.scala | 29 +++++++++++++++++++++--------
 1 file changed, 21 insertions(+), 8 deletions(-)

diff --git a/project/ParadoxSupport.scala b/project/ParadoxSupport.scala
index d114d9432..639425668 100644
--- a/project/ParadoxSupport.scala
+++ b/project/ParadoxSupport.scala
@@ -31,15 +31,28 @@ object ParadoxSupport {
           case _                                   => sys.error("Source references are not supported")
         }
         val file = SourceDirective.resolveFile("signature", source, page.file, variables)
-        val Signature = """\s*((def|val|type) (\w+)(?=[:(\[]).*)(\s+\=.*)""".r // stupid approximation to match a signature
+
+        // The following are stupid approximation's to match a signature/s
+        val TypeSignature = """\s*(type (\w+)(?=[:(\[]).*)(\s+\=.*)""".r
         // println(s"Looking for signature regex '$Signature'")
-        val text =
-          Source.fromFile(file)(Codec.UTF8).getLines.collect {
-            case line @ Signature(signature, kind, l, definition) if labels contains l.toLowerCase() =>
-              // println(s"Found label '$l' with sig '$full' in line $line")
-              if (kind == "type") signature + definition
-              else signature
-          }.mkString("\n")
+        val lines = Source.fromFile(file)(Codec.UTF8).getLines.toList
+
+        val types = lines.collect {
+          case line @ TypeSignature(signature, l, definition) if labels contains l.toLowerCase() =>
+            // println(s"Found label '$l' with sig '$full' in line $line")
+            signature + definition
+        }
+
+        val Signature = """.*((def|val) (\w+)(?=[:(\[]).*)""".r
+
+        val other = lines.mkString.split("=").collect {
+          case line @ Signature(signature, kind, l) if labels contains l.toLowerCase() =>
+            // println(s"Found label '$l' with sig '$full' in line $line")
+            signature
+              .replaceAll("""\s{2,}""", " ") // Due to formatting with new lines its possible to have excessive whitespace
+        }
+
+        val text = (types ++ other).mkString("\n")
 
         if (text.trim.isEmpty) {
           ctx.error(
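
The two regexes introduced in the ParadoxSupport diff above are easier to follow with concrete input. A rough Python re-creation (the real code is Scala in project/ParadoxSupport.scala; the sample lines and helper names here are illustrative, and the def/val case is simplified to a single chunk rather than the file-wide split on `=`):

```python
import re

# Python equivalents of the two Scala patterns from the diff above.
TYPE_SIGNATURE = re.compile(r"\s*(type (\w+)(?=[:(\[]).*)(\s+=.*)")
DEF_VAL_SIGNATURE = re.compile(r".*((def|val) (\w+)(?=[:(\[]).*)")

def extract_type(line: str):
    """For a type alias, keep both signature and definition (signature + definition)."""
    m = TYPE_SIGNATURE.fullmatch(line)
    return (m.group(1) + m.group(3)) if m else None

def extract_def(chunk: str):
    """For a def/val, keep only the signature (the body after '=' is dropped),
    collapsing excess whitespace as the Scala replaceAll does."""
    m = DEF_VAL_SIGNATURE.fullmatch(chunk.split("=")[0].rstrip())
    return re.sub(r"\s{2,}", " ", m.group(1)) if m else None

print(extract_type("  type Handler[T] = T => Unit"))   # type Handler[T] = T => Unit
print(extract_def("def apply(x: Int): Int = x + 1"))   # def apply(x: Int): Int
```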




[incubator-pekko-http] 01/04: Replace scalariform with scalafmt


commit 25626716949ef520428912c661cd7a5041ff63e6
Author: Matthew de Detrich <ma...@aiven.io>
AuthorDate: Mon Nov 14 10:41:48 2022 +0100

    Replace scalariform with scalafmt
---
 .github/workflows/validate-and-test.yml |  4 +-
 .scalafmt.conf                          | 77 +++++++++++++++++++++++++++++++++
 CONTRIBUTING.md                         |  2 +-
 build.sbt                               |  3 --
 project/Formatting.scala                | 36 ---------------
 project/MultiNode.scala                 |  9 ++--
 project/plugins.sbt                     |  2 +-
 7 files changed, 84 insertions(+), 49 deletions(-)

diff --git a/.github/workflows/validate-and-test.yml b/.github/workflows/validate-and-test.yml
index 845040ec7..3887cc8fe 100644
--- a/.github/workflows/validate-and-test.yml
+++ b/.github/workflows/validate-and-test.yml
@@ -33,8 +33,8 @@ jobs:
           path: project/**/target
           key: build-target-${{ hashFiles('**/*.sbt', 'project/build.properties', 'project/**/*.scala') }}
 
-      - name: Autoformat
-        run: sbt +headerCreateAll +scalariformFormat +test:scalariformFormat
+      - name: Check code is formatted
+        run: sbt scalafmtCheckAll scalafmtSbtCheck headerCheckAll
 
       - name: Check for missing formatting
         run: git diff --exit-code --color
diff --git a/.scalafmt.conf b/.scalafmt.conf
new file mode 100644
index 000000000..efa297ee4
--- /dev/null
+++ b/.scalafmt.conf
@@ -0,0 +1,77 @@
+version                                  = 3.6.1
+runner.dialect                           = scala213
+project.git                              = true
+style                                    = defaultWithAlign
+docstrings.style                         = Asterisk
+docstrings.wrap                          = false
+indentOperator.preset                    = spray
+maxColumn                                = 120
+lineEndings                              = preserve
+rewrite.rules                            = [RedundantParens, SortImports, AvoidInfix]
+indentOperator.exemptScope               = all
+align.preset                             = some
+align.tokens."+"                         = [
+  {
+    code   = "~>"
+    owners = [
+      { regex = "Term.ApplyInfix" }
+    ]
+  }
+]
+literals.hexDigits                       = upper
+literals.hexPrefix                       = lower
+binPack.unsafeCallSite                   = always
+binPack.unsafeDefnSite                   = always
+binPack.indentCallSiteSingleArg          = false
+binPack.indentCallSiteOnce               = true
+newlines.avoidForSimpleOverflow          = [slc]
+newlines.source                          = keep
+newlines.beforeMultiline                 = keep
+align.openParenDefnSite                  = false
+align.openParenCallSite                  = false
+align.allowOverflow                      = true
+optIn.breakChainOnFirstMethodDot         = false
+optIn.configStyleArguments               = false
+danglingParentheses.preset               = false
+spaces.inImportCurlyBraces               = true
+rewrite.neverInfix.excludeFilters        = [
+  and
+  min
+  max
+  until
+  to
+  by
+  eq
+  ne
+  "should.*"
+  "contain.*"
+  "must.*"
+  in
+  ignore
+  be
+  taggedAs
+  thrownBy
+  synchronized
+  have
+  when
+  size
+  only
+  noneOf
+  oneElementOf
+  noElementsOf
+  atLeastOneElementOf
+  atMostOneElementOf
+  allElementsOf
+  inOrderElementsOf
+  theSameElementsAs
+  theSameElementsInOrderAs
+]
+rewriteTokens          = {
+  "⇒": "=>"
+  "→": "->"
+  "←": "<-"
+}
+project.excludeFilters = [
+  "scripts/authors.scala"
+]
+project.layout         = StandardConvention
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 770a1356f..4ecda37c0 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -283,7 +283,7 @@ the validator to test all projects.
 
 ### Scala style 
 
-Akka-http uses [Scalariform](https://github.com/scala-ide/scalariform) to enforce some of the code style rules.
+Akka-http uses [scalafmt](https://scalameta.org/scalafmt/) to enforce some of the code style rules.
 
 ### Java style
 
diff --git a/build.sbt b/build.sbt
index f16fb94d8..260d0c359 100644
--- a/build.sbt
+++ b/build.sbt
@@ -3,7 +3,6 @@ import akka.ValidatePullRequest._
 import AkkaDependency._
 import Dependencies.{h2specExe, h2specName}
 import com.typesafe.sbt.SbtMultiJvm.MultiJvmKeys.MultiJvm
-import com.typesafe.sbt.SbtScalariform.ScalariformKeys
 import java.nio.file.Files
 import java.nio.file.attribute.{PosixFileAttributeView, PosixFilePermission}
 
@@ -34,7 +33,6 @@ inThisBuild(Def.settings(
     Tests.Argument(TestFrameworks.ScalaTest, "-oDF")
   ),
   Dependencies.Versions,
-  Formatting.formatSettings,
   shellPrompt := { s => Project.extract(s).currentProject.id + " > " },
   concurrentRestrictions in Global += Tags.limit(Tags.Test, 1),
   onLoad in Global := {
@@ -465,7 +463,6 @@ lazy val docs = project("docs")
       "github.base_url" -> GitHub.url(version.value, isSnapshot.value),
     ),
     apidocRootPackage := "akka",
-    Formatting.docFormatSettings,
     ValidatePR / additionalTasks += Compile / paradox,
     ThisBuild / publishRsyncHost := "akkarepo@gustav.akka.io",
     publishRsyncArtifacts := List((Compile / paradox).value -> gustavDir("docs").value),
diff --git a/project/Formatting.scala b/project/Formatting.scala
deleted file mode 100644
index 85e93a27c..000000000
--- a/project/Formatting.scala
+++ /dev/null
@@ -1,36 +0,0 @@
-/*
- * Copyright (C) 2017-2020 Lightbend Inc. <https://www.lightbend.com>
- */
-
-package akka
-
-import sbt._
-import com.typesafe.sbt.SbtMultiJvm.MultiJvmKeys.MultiJvm
-import com.typesafe.sbt.SbtScalariform.ScalariformKeys
-
-object Formatting {
-  import scalariform.formatter.preferences._
-
-  lazy val formatSettings = Seq(
-    ScalariformKeys.preferences := setPreferences(ScalariformKeys.preferences.value),
-    Compile / ScalariformKeys.preferences := setPreferences(ScalariformKeys.preferences.value),
-    MultiJvm / ScalariformKeys.preferences := setPreferences(ScalariformKeys.preferences.value)
-  )
-
-  lazy val docFormatSettings = Seq(
-    ScalariformKeys.preferences := setPreferences(ScalariformKeys.preferences.value),
-    Compile / ScalariformKeys.preferences := setPreferences(ScalariformKeys.preferences.value),
-    Test / ScalariformKeys.preferences := setPreferences(ScalariformKeys.preferences.value),
-    MultiJvm / ScalariformKeys.preferences := setPreferences(ScalariformKeys.preferences.value)
-  )
-
-  def setPreferences(preferences: IFormattingPreferences) = preferences
-    .setPreference(RewriteArrowSymbols, true)
-    .setPreference(UseUnicodeArrows, false)
-    .setPreference(AlignParameters, true)
-    .setPreference(AlignSingleLineCaseStatements, true)
-    .setPreference(DoubleIndentConstructorArguments, false)
-    .setPreference(DoubleIndentMethodDeclaration, false)
-    .setPreference(DanglingCloseParenthesis, Preserve)
-    .setPreference(NewlineAtEndOfFile, true)
-}
diff --git a/project/MultiNode.scala b/project/MultiNode.scala
index be0251006..de6d6d924 100644
--- a/project/MultiNode.scala
+++ b/project/MultiNode.scala
@@ -4,9 +4,8 @@
 
 package akka
 
-import com.typesafe.sbt.{SbtMultiJvm, SbtScalariform}
+import com.typesafe.sbt.SbtMultiJvm
 import com.typesafe.sbt.SbtMultiJvm.MultiJvmKeys._
-import com.typesafe.sbt.SbtScalariform.ScalariformKeys
 import sbt._
 import sbt.Keys._
 
@@ -50,13 +49,11 @@ object MultiNode extends AutoPlugin {
 
   private val multiJvmSettings =
     SbtMultiJvm.multiJvmSettings ++
-    inConfig(MultiJvm)(SbtScalariform.configScalariformSettings) ++
-    Seq(
+    inConfig(MultiJvm)(Seq(
       MultiJvm / jvmOptions := defaultMultiJvmOptions,
-      MultiJvm / compile / compileInputs := ((MultiJvm / compile / compileInputs) dependsOn (MultiJvm / ScalariformKeys.format)).value,
       MultiJvm / scalacOptions := (Test / scalacOptions).value,
       MultiJvm / compile := ((MultiJvm / compile) triggeredBy (Test / compile)).value
-    ) ++
+    )) ++
     CliOptions.hostsFileName.map(MultiJvm / multiNodeHostsFileName := _) ++
     CliOptions.javaName.map(MultiJvm / multiNodeJavaName := _) ++
     CliOptions.targetDirName.map(MultiJvm / multiNodeTargetDirName := _) ++
diff --git a/project/plugins.sbt b/project/plugins.sbt
index 8e86d0e95..383dba0c7 100644
--- a/project/plugins.sbt
+++ b/project/plugins.sbt
@@ -8,7 +8,7 @@ resolvers += Resolver.jcenterRepo
 
 addSbtPlugin("com.typesafe.sbt" % "sbt-multi-jvm" % "0.4.0")
 addSbtPlugin("com.typesafe" % "sbt-mima-plugin" % "1.1.0")
-addSbtPlugin("org.scalariform" % "sbt-scalariform" % "1.8.3")
+addSbtPlugin("org.scalameta" % "sbt-scalafmt" % "2.4.6")
 addSbtPlugin("com.dwijnand" % "sbt-dynver" % "4.1.1")
 addSbtPlugin("com.github.sbt" % "sbt-unidoc" % "0.5.0")
 addSbtPlugin("com.thoughtworks.sbt-api-mappings" % "sbt-api-mappings" % "3.0.2")
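
One user-visible effect of the new .scalafmt.conf above is the `rewriteTokens` mapping, which replaces the scalariform-era unicode arrows with their ASCII forms. A quick Python illustration of that substitution (the actual rewrite is performed by scalafmt itself; this is just a sketch of the mapping):

```python
# The rewriteTokens mapping from the .scalafmt.conf above: unicode arrows -> ASCII.
REWRITE_TOKENS = {"\u21d2": "=>", "\u2192": "->", "\u2190": "<-"}

def rewrite_tokens(source: str) -> str:
    """Apply the same token substitutions that scalafmt's rewriteTokens performs."""
    for old, new in REWRITE_TOKENS.items():
        source = source.replace(old, new)
    return source

print(rewrite_tokens("xs.map(x \u21d2 x \u2192 1)"))  # xs.map(x => x -> 1)
```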




[incubator-pekko-http] 02/04: Add .gitattributes to enforce unix line endings


commit 2db062119f8d95f21cbff410ce9c5e48b25dfb0b
Author: Matthew de Detrich <ma...@aiven.io>
AuthorDate: Mon Nov 14 10:45:15 2022 +0100

    Add .gitattributes to enforce unix line endings
---
 .gitattributes | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 000000000..9dde9b976
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,5 @@
+# Activate line ending normalization, setting eol will make the behavior match core.autocrlf = input
+* text=auto eol=lf
+# Force batch scripts to always use CRLF line endings
+*.{cmd,[cC][mM][dD]} text eol=crlf
+*.{bat,[bB][aA][tT]} text eol=crlf

