Google Cloud HTTP(s) Load Balancer Gains WebSockets Support

Google Cloud Platform (GCP) HTTP(S) load balancing provides global load balancing for HTTP(S) requests destined for your instances. You can configure URL rules that route some URLs to one set of instances and route other URLs to other instances. Requests are always routed to the instance group that is closest to the user, provided that group has enough capacity and is appropriate for the request. If the closest group does not have enough capacity, the request is sent to the closest group that does have capacity.

HTTP requests can be load balanced based on port 80 or port 8080. HTTPS requests can be load balanced on port 443.

The load balancer acts as an HTTP/2 to HTTP/1.1 translation layer, which means that the web servers always see and respond to HTTP/1.1 requests, but that requests from the browser can be HTTP/1.0, HTTP/1.1, or HTTP/2.

Before you begin

HTTP(S) load balancing uses instance groups to organize instances. Make sure you are familiar with instance groups before you use load balancing.

Example configurations

If you want to jump right in and build a working load balancer for testing, the following guides demonstrate two different scenarios using the HTTP(S) load balancing service. These scenarios provide a practical context for HTTP(S) load balancing and demonstrate how you might set up load balancing for your specific needs.

The rest of this page digs into more detail about how load balancers are constructed and how they work.

Creating a cross-region load balancer

Representation of cross-region load balancing

You can use a global IP address that can intelligently route users based on proximity. For example, if you set up instances in North America, Europe, and Asia, users around the world will be automatically sent to the backends closest to them, assuming those instances have enough capacity. If the closest instances do not have enough capacity, cross-region load balancing automatically forwards users to the next closest region.

Get started with cross-region load balancing

Creating a content-based load balancer

Representation of content-based load balancing

Content-based or content-aware load balancing uses HTTP(S) load balancing to distribute traffic to different instances based on the incoming HTTP(S) URL. For example, you can set up some instances to handle your video content and another set to handle everything else. You can configure your load balancer to direct traffic for example.com/video to the video servers and example.com/ to the default servers.

Get started with content-based load balancing

You can also use HTTP(S) load balancing with Google Cloud Storage buckets. Once you have your content-based load balancer set up, you can add a Cloud Storage bucket to your load balancer.

Content-based and cross-region load balancing can work together by using multiple backend services and multiple regions. You can build on the scenarios above to create a load balancing configuration that meets your needs.

Fundamentals

Overview

An HTTP(S) load balancer is composed of several components. The following diagram illustrates the architecture of a complete HTTP(S) load balancer:

Cross-region load balancing diagram

The following sections describe how these components work together to make up each type of load balancer. For a detailed description of each component, see Components below.

HTTP load balancing

A complete HTTP load balancer is structured as follows:

  1. A global forwarding rule directs incoming requests to a target HTTP proxy.
  2. The target HTTP proxy checks each request against a URL map to determine the appropriate backend service for the request.
  3. The backend service directs each request to an appropriate backend based on serving capacity, zone, and instance health of its attached backends. The health of each backend instance is verified using either an HTTP health check or an HTTPS health check. If the backend service is configured to use the latter, the request will be encrypted on its way to the backend instance.
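
These steps map directly onto gcloud resources. The following is a minimal sketch only, with placeholder names (http-basic-check, www-group, www-service, web-map, web-proxy, web-rule); exact flags may differ slightly between Cloud SDK versions.

  # Health check referenced by the backend service (placeholder name)
  gcloud compute http-health-checks create http-basic-check

  # Backend service with one instance-group backend (placeholder group and zone)
  gcloud compute backend-services create www-service --global \
      --protocol HTTP --http-health-checks http-basic-check
  gcloud compute backend-services add-backend www-service --global \
      --instance-group www-group --instance-group-zone us-central1-b

  # URL map, target HTTP proxy, and global forwarding rule
  gcloud compute url-maps create web-map --default-service www-service
  gcloud compute target-http-proxies create web-proxy --url-map web-map
  gcloud compute forwarding-rules create web-rule --global \
      --target-http-proxy web-proxy --ports 80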

HTTPS load balancing

An HTTPS load balancer shares the same basic structure as an HTTP load balancer (described above), but differs in the following ways:

  • Uses a target HTTPS proxy instead of a target HTTP proxy
  • Requires a signed SSL certificate for the load balancer
  • The client SSL session terminates at the load balancer. Sessions between the load balancer and the instance can either be HTTPS (recommended) or HTTP. If HTTPS, each instance must have a certificate.
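
As a rough sketch of those differences, assuming the web-map URL map from the HTTP example above and a certificate and key in crt.pem and key.pem (all names are placeholders):

  # Upload the certificate, then create an HTTPS proxy and a forwarding rule on port 443
  gcloud compute ssl-certificates create www-ssl-cert \
      --certificate crt.pem --private-key key.pem
  gcloud compute target-https-proxies create web-https-proxy \
      --url-map web-map --ssl-certificates www-ssl-cert
  gcloud compute forwarding-rules create web-https-rule --global \
      --target-https-proxy web-https-proxy --ports 443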

Components

Global forwarding rules and addresses

Global forwarding rules route traffic by IP address, port, and protocol to a load balancing configuration consisting of a target proxy, URL map, and one or more backend services.

Each global forwarding rule provides a single global IP address that can be used in DNS records for your application. No DNS-based load balancing is required. You can either specify the IP address to be used or let Google Compute Engine assign one for you.
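
For example, a sketch of reserving a static global address to use with a forwarding rule (lb-ip is a placeholder name):

  # Reserve a global static IP and print it; pass the printed address
  # via --address when creating the global forwarding rule
  gcloud compute addresses create lb-ip --global
  gcloud compute addresses describe lb-ip --global --format='value(address)'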

Target proxies

Target proxies terminate HTTP(S) connections from clients. They are referenced by one or more global forwarding rules and route incoming requests to a URL map.

The proxies set HTTP request/response headers as follows:

  • Via: 1.1 google (requests and responses)
  • X-Forwarded-Proto: [http | https] (requests only)
  • X-Forwarded-For: <unverified IP(s)>, <immediate client IP>, <global forwarding rule external IP>, <proxies running in GCP> (requests only)
    A comma-separated list of IP addresses appended to by the intermediaries the request traveled through. If you are running proxies inside GCP that append entries to the X-Forwarded-For header, your software must take into account the existence and number of those proxies. Only the <immediate client IP> and <global forwarding rule external IP> entries are provided by the load balancer; all other entries in the list are passed along without verification. The <immediate client IP> entry is the client that connected directly to the load balancer. The <global forwarding rule external IP> entry is the external IP address of the load balancer's forwarding rule. If there are more entries than that, then the first entry in the list is the address of the original client, and other entries before the <immediate client IP> entry represent proxies that forwarded the request along to the load balancer.
  • X-Cloud-Trace-Context: <trace-id>/<span-id>;<trace-options> (requests only)
    Parameters for Stackdriver Trace.
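
One quick, informal way to see the Via header the proxy adds to responses is to fetch any URL through the load balancer and dump only the response headers (the IP below is a placeholder):

  # -s: quiet, -D -: print response headers, -o /dev/null: discard the body
  curl -s -D - -o /dev/null http://LOAD_BALANCER_IP/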

URL maps

URL maps define matching patterns for URL-based routing of requests to the appropriate backend services. A default service is defined to handle any requests that do not match a specified host rule or path matching rule. In some situations, such as the cross-region load balancing example, you might not define any URL rules and rely only on the default service. For content-based routing of traffic, the URL map allows you to divide your traffic by examining the URL components to send requests to different sets of backends.
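
Building on the web-map URL map from the earlier sketch, and assuming a second backend service named video-service, the video example from earlier might look roughly like this (the host and path rule are placeholders):

  # Route example.com/video/* to the video backend service, everything else to the default
  gcloud compute url-maps add-path-matcher web-map \
      --path-matcher-name video-matcher \
      --default-service www-service \
      --path-rules "/video/*=video-service" \
      --new-hosts example.com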

SSL certificates

SSL certificates are used by target HTTPS proxies to securely route incoming HTTPS requests to backend services defined in a URL map.

Backend services

Backend services direct incoming traffic to one or more attached backends. Each backend is composed of an instance group and additional serving capacity metadata. Backend serving capacity can be based on CPU or requests per second (RPS).

Each backend service also specifies which health checks will be performed against the available instances.

HTTP(S) load balancing supports Compute Engine Autoscaler, which allows users to perform autoscaling on the instance groups in a backend service. For more information, see Scaling Based on HTTP load balancing serving capacity.

You can enable connection draining on backend services to ensure minimal interruption to your users when an instance that is serving traffic is terminated, removed manually, or removed by an autoscaler. To learn more about connection draining, read the Enabling Connection Draining documentation.
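
For example, connection draining can be enabled on an existing backend service with a single update, here giving in-flight requests up to 300 seconds to complete (the service name is a placeholder from the earlier sketch):

  gcloud compute backend-services update www-service --global \
      --connection-draining-timeout 300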

Backend buckets

Backend buckets direct incoming traffic to Google Cloud Storage buckets. See Adding a Cloud Storage bucket to content-based load balancing for an example of adding buckets to an existing load balancer setup.
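
Creating a backend bucket is a single command; it is then referenced from your URL map in place of a backend service. A sketch with placeholder names, assuming the Cloud Storage bucket already exists:

  gcloud compute backend-buckets create static-assets \
      --gcs-bucket-name example-static-bucket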

Firewall rules

You must create a firewall rule that allows traffic from 130.211.0.0/22 and 35.191.0.0/16 to reach your instances. This rule allows traffic from both the load balancer and the health checker. The rule must allow traffic on the port your global forwarding rule has been configured to use, and your health checker should be configured to use the same port. If your health checker uses a different port, then you must create another firewall rule for that port.

Note that firewall rules block and allow traffic at the instance level, not at the edges of the network. They cannot prevent traffic from reaching the load balancer itself.
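
A sketch of such a rule for a load balancer and health checker using ports 80 and 443 (the rule name is a placeholder; adjust the ports to match your configuration):

  gcloud compute firewall-rules create allow-lb-and-health-checks \
      --source-ranges 130.211.0.0/22,35.191.0.0/16 \
      --allow tcp:80,tcp:443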

Load distribution algorithm

HTTP(S) load balancing provides two methods of determining instance load. Within the backend service object, the balancingMode property selects between the requests per second (RPS) and CPU utilization modes. Both modes allow a maximum value to be specified; the HTTP load balancer will try to ensure that load remains under the limit, but short bursts above the limit can occur during failover or load spike events.

Incoming requests are sent to the region closest to the user that has remaining capacity. If more than one zone is configured with backends in a region, the traffic is distributed across the instance groups in each zone according to each group's capacity. Within the zone, the requests are spread evenly over the instances using a round-robin algorithm. Round-robin distribution can be overridden by configuring session affinity.
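
As a sketch of the two balancingMode options on a backend that is already attached to the www-service backend service from earlier (names and limits are placeholders):

  # RPS mode: cap each instance at 100 requests per second
  gcloud compute backend-services update-backend www-service --global \
      --instance-group www-group --instance-group-zone us-central1-b \
      --balancing-mode RATE --max-rate-per-instance 100

  # CPU utilization mode: aim to keep average CPU below 80%
  gcloud compute backend-services update-backend www-service --global \
      --instance-group www-group --instance-group-zone us-central1-b \
      --balancing-mode UTILIZATION --max-utilization 0.8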

Session affinity

Session affinity sends all requests from the same client to the same virtual machine instance as long as the instance stays healthy and has capacity.

GCP HTTP(S) Load Balancing offers two types of session affinity: client IP affinity and generated cookie affinity.
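
Either type can be set on an existing backend service; a minimal sketch with a placeholder service name:

  # Client IP affinity
  gcloud compute backend-services update www-service --global \
      --session-affinity CLIENT_IP

  # Generated cookie affinity; a TTL of 0 means the cookie lasts for the browser session
  gcloud compute backend-services update www-service --global \
      --session-affinity GENERATED_COOKIE --affinity-cookie-ttl 0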

WebSocket proxy support

The HTTP(S) load balancer has native support for the WebSocket protocol. Backends that use WebSocket to communicate with clients can use the HTTP(S) load balancer as a front end, for scale and availability. The load balancer does not need any additional configuration to proxy WebSocket connections.

The WebSocket protocol, which is defined in RFC6455, provides a full-duplex communication channel between clients and servers. The channel is initiated from an HTTP(S) request.

When the HTTP(S) load balancer recognizes a WebSocket Upgrade request from an HTTP(S) client and the request is followed by a successful Upgrade response from the backend instance, the load balancer proxies bidirectional traffic for the duration of the current connection. If the backend does not return a successful Upgrade response, the load balancer closes the connection.

If you have configured either client IP or generated cookie session affinity for your HTTP(S) load balancer, all WebSocket connections from a client are sent to the same backend instance, provided the instance continues to pass health checks and has capacity.
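
One informal way to check end-to-end WebSocket proxying is to replay the handshake headers from RFC 6455 against the load balancer and look for a 101 Switching Protocols response (the IP and path below are placeholders; the key is the RFC's sample value):

  curl -i -N http://LOAD_BALANCER_IP/ws \
      -H "Connection: Upgrade" \
      -H "Upgrade: websocket" \
      -H "Sec-WebSocket-Version: 13" \
      -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ=="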

Interfaces

Your HTTP(S) load balancing service can be configured and updated through the following interfaces:

  • The gcloud command-line tool: a command-line tool included in the Cloud SDK. The HTTP(S) load balancing documentation calls on this tool frequently to accomplish tasks. For a complete overview of the tool, see the gcloud Tool Guide. You can find commands related to load balancing in the gcloud compute command group.

    You can also get detailed help for any gcloud command by using the --help flag:

    gcloud compute http-health-checks create --help
  • The Google Cloud Platform Console: Load balancing tasks can be accomplished through the Google Cloud Platform Console.

  • The REST API: All load balancing tasks can be accomplished using the Google Compute Engine API. The API reference docs describe the resources and methods available to you.
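
For example, a rough sketch of listing a project's URL maps through the REST API, assuming the Cloud SDK is installed and authenticated and [PROJECT_ID] stands in for your project:

  curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      "https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/global/urlMaps"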

TLS support

An HTTPS target proxy accepts only TLS 1.0, 1.1, and 1.2 when terminating client SSL requests. It speaks only TLS 1.0, 1.1, and 1.2 to the backend service when the backend protocol is HTTPS.

Illegal request handling

The HTTP(S) load balancer blocks client requests from reaching the backend for a number of reasons: some strictly for HTTP/1.1 compliance and others to avoid unexpected data being passed to the backends.

For HTTP/1.1 compliance, the load balancer blocks the request when any of the following is true:

  • The first line of the request cannot be parsed.
  • A header is missing the : delimiter.
  • Headers or the first line contain invalid characters.
  • The content length is not a valid number, or there are multiple content length headers.
  • There are multiple transfer encoding keys, or there are unrecognized transfer encoding values.
  • There's a non-chunked body and no content length specified.
  • Body chunks are unparseable. This is the only case where some data will make it to the backend. The load balancer will close the connections to client and backend when it receives an unparseable chunk.

The load balancer also blocks the request if any of the following are true:

  • The combination of request URL and headers is longer than about 15KB.
  • The request method does not allow a body, but the request has one.
  • The request contains an upgrade header.
  • The HTTP version is unknown.

Logging

Each HTTP(S) request is logged temporarily via Stackdriver Logging. If you have been accepted into the Alpha testing phase, logging is automatic and does not need to be enabled.

How to view logs

To view logs, go to the Logs Viewer in the Cloud Platform Console.

HTTP(S) logs are indexed first by forwarding rule, then by URL map.

  • To see all logs, in the first pull-down menu select Load Balancing > All forwarding rules.
  • To see logs for just one forwarding rule, select a single forwarding rule name from the list.
  • To see logs for just one URL map used by a forwarding rule, select Load Balancing and choose the forwarding rule and URL map of interest.

Log fields of type boolean typically only appear if they have a value of true. If a boolean field has a value of false, that field is omitted from the log.

UTF-8 encoding is enforced for log fields. Characters that are not UTF-8 characters are replaced with question marks.
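
If you prefer the command line, a rough equivalent of the Logs Viewer query is sketched below; it assumes the load balancer request logs are exposed under the http_load_balancer resource type:

  # Read the ten most recent HTTP(S) load balancer log entries
  gcloud logging read 'resource.type="http_load_balancer"' --limit 10 --format json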

What is logged

HTTP(S) load balancing log entries contain information useful for monitoring and debugging your HTTP(S) traffic. Log entries contain the following types of information:

  • General information shown in most GCP logs, such as severity, project ID, project number, timestamp, and so on.
  • HttpRequest log fields.
  • A statusDetails field inside the structPayload. This field holds a string that explains why the load balancer returned the HTTP status that it did. The tables below contain further explanations of these log strings.

statusDetail HTTP success messages

Each entry lists the statusDetails string, its meaning, and the common accompanying response codes.

  • response_from_cache: The HTTP request was served from cache. Common response codes: any cacheable response code.
  • response_from_cache_validated: The return code was set from a cached entry that was validated by a backend. Common response codes: any cacheable response code.
  • response_sent_by_backend: The HTTP request was proxied successfully to the backend. Common response codes: any response code returned by the VM backend.

statusDetail HTTP failure messages

Each entry lists the statusDetails string, its meaning, and the common accompanying response codes.

  • aborted_request_due_to_backend_early_response: A request with a body was aborted because the backend sent an early response with an error code; the response was forwarded to the client and the request was terminated. Common response codes: 4XX or 5XX.
  • backend_503_propagated_as_error: The backend sent a 503 that the load balancer could not recover from with retries. Common response codes: 503.
  • backend_connection_closed_after_partial_response_sent: The backend connection closed unexpectedly after a partial response had been sent to the client. Common response codes: any response code returned by the VM backend; a 0 indicates the backend did not send full response headers.
  • backend_connection_closed_before_data_sent_to_client: The backend unexpectedly closed its connection to the load balancer before the response was proxied to the client. Common response codes: 502.
  • backend_early_response_with_non_error_status: The backend sent a non-error response (1XX or 2XX) to an HTTP POST/PUT request before receiving the whole request body. Common response codes: 502.
  • backend_response_corrupted: The HTTP response body sent by the backend has invalid chunked transfer-encoding or is otherwise corrupted. Common response codes: any response code is possible depending on the nature of the corruption; often 502.
  • backend_timeout: The backend timed out while generating a response. Common response codes: 502.
  • body_not_allowed: The client sent an HTTP request with a body, but the HTTP method used does not allow a body. Common response codes: 400.
  • cache_lookup_failed_after_partial_response: The load balancer failed to serve a full response from cache due to an internal error. Common response codes: 2XX.
  • client_disconnected_after_partial_response: The connection to the client was broken after the load balancer sent a partial response. Common response codes: any response code returned by the VM backend.
  • client_disconnected_before_any_response: The connection to the client was broken before the load balancer sent any response. Common response codes: 0.
  • client_timed_out: The load balancer idled out the client connection due to lack of progress while proxying either the request or the response. Common response codes: 0.
  • error_uncompressing_gzipped_body: There was an error uncompressing a gzipped HTTP response. Common response codes: 503.
  • failed_to_connect_to_backend: The load balancer failed to connect to the backend. Common response codes: 502.
  • failed_to_pick_backend: The load balancer failed to pick a healthy backend to handle the request. Common response codes: 502.
  • headers_too_long: The request headers were larger than the maximum allowed. Common response codes: 413.
  • http_version_not_supported: The HTTP version is not supported; only HTTP 0.9, 1.0, 1.1, and 2.0 are currently supported. Common response codes: 400.
  • http2_server_push_canceled_invalid_response_code: The load balancer canceled the HTTP/2 server push because the backend returned an invalid response code. Can only happen when using HTTP/2 to the backend; the client receives a RST_STREAM containing INTERNAL_ERROR.
  • internal_error: Internal error at the load balancer. Common response codes: 400.
  • invalid_http2_client_header_format: The HTTP/2 headers from the client are invalid. Common response codes: 400.
  • malformed_chunked_body: The request body was improperly chunk encoded. Common response codes: 411.
  • required_body_but_no_content_length: The HTTP request requires a body, but the request headers did not include a content-length or transfer-encoding chunked header. Common response codes: 400 or 403.
  • secure_url_rejected: A request with an https:// URL was received over a plaintext HTTP/1.1 connection. Common response codes: 400.
  • unsupported_method: The client supplied an unsupported HTTP request method. Common response codes: 400.
  • upgrade_header_rejected: The client HTTP request contained the Upgrade header and was refused. Common response codes: 400.
  • uri_too_long: The HTTP request URI was longer than the maximum allowed length. Common response codes: 414.
  • websocket_handshake_failed: The WebSocket handshake failed. Common response codes: 501.

Notes and Restrictions

  • HTTP(S) load balancing supports the HTTP/1.1 100 Continue response.
  • If your load balanced instances are running a public operating system image supplied by Compute Engine, then firewall rules in the operating system will be configured automatically to allow load balanced traffic. If you are using a custom image, you have to configure the operating system firewall manually. This is separate from the GCP firewall rule that must be created as part of configuring an HTTP(S) load balancer.
  • Load balancing does not keep instances in sync. You must set up your own mechanisms, such as using Deployment Manager, for ensuring that your instances have consistent configurations and data.
  • The HTTP(S) load balancer does not support sending an HTTP DELETE with a body to the load balancer. Such requests will receive an error message: Error 400 (Bad Request)!! Your client has issued a malformed or illegal request. Only DELETE requests without bodies are supported.

Troubleshooting

Load balanced traffic does not have a source address of the original client

Traffic from the load balancer to your instances has an IP address in the ranges of 130.211.0.0/22 and 35.191.0.0/16. When viewing logs on your load balanced instances, you will not see the source address of the original client. Instead, you will see source addresses from this range.

Getting a permission error when trying to view an object in my Cloud Storage bucket

In order to serve objects through load balancing, the Cloud Storage objects must be publicly accessible. Make sure to update the permissions of the objects being served so they are publicly readable.
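
A minimal sketch with gsutil, where the bucket and object names stand in for your own:

  # Make a single object publicly readable
  gsutil acl ch -u AllUsers:R gs://example-static-bucket/path/to/content.jpg

  # Or make everything in the bucket publicly readable
  gsutil -m acl ch -R -u AllUsers:R gs://example-static-bucket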

URL doesn’t serve expected Cloud Storage object

The Cloud Storage object to serve is determined based on your URL map and the URL that you request. If the request path maps to a backend bucket in your URL map, the Cloud Storage object is determined by appending the full request path onto the Cloud Storage bucket that the URL map specifies.

For example, if you map /static/* to gs://[EXAMPLE_BUCKET], the request to https://<GCLB IP or Host>/static/path/to/content.jpg will try to serve gs://[EXAMPLE_BUCKET]/static/path/to/content.jpg. If that object doesn't exist, you will get the following error message instead of the object:

NoSuchKey: The specified key does not exist.
