Show HN: Design and Simulate Molecules in Virtual Reality

THIS END USER LICENSE AGREEMENT ("AGREEMENT") IS A LEGAL AGREEMENT
BETWEEN YOU ("CUSTOMER") AND NANOME, INC. ("SUPPLIER"). BY CLICKING THE "I
ACCEPT" BUTTON OR EXECUTING AN ORDER FORM THAT INCLUDES THIS
AGREEMENT BY REFERENCE, CUSTOMER ACKNOWLEDGES THAT CUSTOMER HAS
REVIEWED AND ACCEPTS THIS AGREEMENT. IF YOU ARE AGREEING TO THIS
AGREEMENT AS AN INDIVIDUAL, “CUSTOMER” REFERS TO YOU INDIVIDUALLY. IF YOU
ARE AGREEING TO THIS AGREEMENT AS A REPRESENTATIVE OF AN ENTITY, YOU
REPRESENT THAT YOU HAVE THE AUTHORITY TO BIND THAT ENTITY AND
“CUSTOMER” REFERS TO THAT ENTITY AND ALL THE USERS SPECIFIED IN THE ORDER
FORM. IF CUSTOMER DOES NOT AGREE WITH ALL THE TERMS OF THIS AGREEMENT,
DO NOT DOWNLOAD OR OTHERWISE USE THE SOFTWARE REFERENCED IN THE
ORDER FORM.
CUSTOMER’S USE OF THE SOFTWARE REQUIRES REGISTRATION FOR AN ACCOUNT
AT WWW.NANOME.AI AND ACCEPTANCE OF THE NANOME TERMS OF SERVICE
LOCATED AT WWW.NANOME.AI/TERMS AND THE NANOME PRIVACY POLICY LOCATED
AT WWW.NANOME.AI/PRIVACY, WHICH SOLELY GOVERN CUSTOMER’S USE OF ALL
NANOME MOBILE APPLICATIONS, AND CLOUD OR SAAS SERVICES, INCLUDING THE
NANOME MARKETPLACE AND VIRTUAL ROOMS. CUSTOMER HEREBY CONFIRMS THAT
IT HAS READ AND UNDERSTANDS THE NANOME TERMS OF SERVICE AND THE
NANOME PRIVACY POLICY.
1. DEFINITIONS.
1.1. ​“Affiliate” means any entity, now or hereafter existing (so long as such entity does not
have its own agreement with Supplier for use of the Software or access and use of the
Supplier’s Services) that directly or indirectly, through one or more intermediaries, controls, is
controlled by, or is under common control with the subject entity. For purposes of this definition,
“control” means direct or indirect possession of the power to direct or cause the direction of the
management and policies of an entity, whether through the ownership of voting securities, by
contract or otherwise. An entity shall be considered an “Affiliate” only so long as that entity
meets the foregoing definition.
1.2. ​“Ancillary Services” means implementation, training or consulting services that
Supplier may perform as described in a SOW executed by the parties.
1.3. ​“Authorized Purposes” means Customer’s internal business purposes if the License
Term is not for an Evaluation License. If the License Term is for an Evaluation License, then
“Authorized Purposes” means Customer’s internal testing and evaluation use only. If the
License Term is for an Educational License, then “Authorized Purposes” means Customer’s
educational, research or training purposes use only.
1.4. ​“Beta License” means a license granted to Customer with respect to a pre-release
version of the Software for the period specified in the Order Form, and that is not supported,
may contain bugs or errors (but shall not knowingly contain any undisclosed Malicious Code),
and may be subject to additional terms that shall be provided by Supplier to Customer.
1.5. ​“Customer Data” means all data submitted, stored, posted, displayed, or otherwise
transmitted by or on behalf of Customer or any User and received and analyzed by the
Software.
1.6. ​“Customer System” means Customer’s internal website(s), servers and other
equipment and software, including, without limitation, mobile devices and virtual reality
hardware systems.
1.7. ​“Delivery Date” means the date, set forth in the applicable Order Form, on which the
Software is scheduled to be made available to Customer.
1.8. ​“Documentation” means the printed, paper, electronic or online user instructions and
help files made generally available by Supplier for use with the Software, as may be updated
from time to time by Supplier.
1.9. ​“Educational License” means a license with respect to the Software that has been
designated as such by Supplier to an educational institution, a student, a training facility or other
person or entity for educational, research or training purposes for the applicable License Term
and that may be subject to additional terms that shall be provided by Supplier to Customer.
1.10. ​“Evaluation License” means a non-production license granted to Customer with respect
to a Version of the Software for the applicable License Term and which may have limited
functionality or features.
1.11. ​“Free License” means a license to a Version of the Software that is provided free of
charge to Customer and contains a limited feature set.
1.12. ​“Intellectual Property Rights” means all intellectual property rights or similar proprietary
rights, including (a) patent rights and utility models, (b) copyrights and database rights, (c)
trademarks, trade names, domain names and trade dress and the goodwill associated
therewith, (d) trade secrets, (e) mask works, and (f) industrial design rights; in each case,
including any registrations of, applications to register, and renewals and extensions of, any of
the foregoing in any jurisdiction in the world.
1.13. ​“License Term” means the license period for Customer’s use of the Software set forth
in an Order Form and any renewals or extensions thereof. Unless otherwise specified in the
applicable Order Form, the License Term for an Evaluation License is limited to thirty (30) days
from the Delivery Date.
1.14. ​“Malicious Code” means viruses, worms, time bombs, Trojan horses and other harmful
or malicious code, files, scripts, agents or programs.
1.15. ​“Marketplace” means the virtual marketplace offered or maintained by Supplier or its
affiliates as the “Marketplace” which may include allowing customers to acquire for free or to
purchase license rights to Plugins, software, content and other virtual and digital assets from
Supplier and third parties.
1.16. ​“Non-GA Solutions” means Supplier products or services that are not generally
available to Supplier customers, including, without limitation, Beta Licenses, and that may be
subject to additional terms that shall be provided by Supplier to Customer.
1.17. ​“Open Source Software” means all software that is available under the GNU Affero
General Public License (AGPL), GNU General Public License (GPL), GNU Lesser General
Public License (LGPL), Mozilla Public License (MPL), Apache License, BSD licenses, or any
other license approved by the Open Source Initiative (www.opensource.org).
1.18. ​“Order Form” means the ordering documents for Services and licenses for Software
purchased from Supplier that are entered into hereunder by the parties from time to time,
including modifications, supplements and addenda thereto. Order Forms are incorporated
herein. If there is any inconsistency or conflict between an Order Form and this Agreement, the
Agreement controls, unless the Order Form specifically identifies by Section reference the
provision that such Order Form is modifying, and then such change will apply for such Order
Form only. Affiliates of Customer may purchase Services and licenses for the Software subject
to this Agreement by executing Order Forms hereunder, and by executing an Order Form, that
Affiliate of Customer shall be bound by this Agreement as if it were an original party hereto.
1.19. ​“Plugins” means a software component offered or sold via a Marketplace that adds a
specific feature to the Software, whether developed by Supplier or a third party. Plugins may be
supplied to Customer subject to separate or additional terms and conditions provided to
Customer by Supplier at the time of acquisition.
1.20. ​“Services” means the Support Services and any Ancillary Services.
1.21. ​“SOW” means a written statement of work entered into and signed by the parties
describing Ancillary Services to be provided by Supplier to Customer.
1.22. ​“Software” means the Version of the software product (with the corresponding specific
features of such Version) and any Supplier Plugins specified in an Order Form and any Supplier
Updates that Supplier provides to Customer in accordance with Support Services that Customer
is entitled to receive pursuant to this Agreement, all in object code form only. For all purposes
of this Agreement, “Software” excludes any Open Source Software and all Third Party Offerings,
such as third party Plugins, software, content and other virtual and digital assets.
1.23. ​“Support Services” means the support and maintenance services offered by Supplier
and purchased by Customer pursuant to an Order Form.
1.24. ​“Third Party Offerings” means certain software or services delivered or performed by
third parties that are required for the operation of the Software, such as the Photon engine and
the Oculus Desktop Client, and any associated products provided by third parties, such as third
party Plugins, that interoperate with the Software.
1.25. ​“Updates” means bug fixes, patches and maintenance releases to the applicable
Version of the Software to the extent made generally available by Supplier to its licensees.
1.26. ​“Users” means Customer’s or its Affiliates’ employees, consultants, contractors, agents
and third parties with whom Customer may transact business and (a) for whom access to the
Software during a License Term has been purchased pursuant to an Order Form, (b) who are
authorized by Customer or its Affiliates to access and use the Software, and (c) where
applicable, who have been supplied user identifications and passwords for such purpose by
Customer (or by Supplier at Customer’s request).
1.27. “Version” means a particular version or edition of the Software with a particular
bundling of features and functionality associated with such version that provides substantially
greater or lesser functionality with respect to the Software, the
Marketplace or other Nanome cloud or SaaS services, such as access to public and private
virtual rooms, the ability to view or interact within a virtual room, access and storage of public or
private designs and other virtual assets, etc. Different Versions of the Software consist of the
Pro version, the Plus version, the Free License, the Educational License, the Evaluation License
and the Beta License.
1.28. ​“Virtual Rooms” means the private or public virtual rooms offered and maintained by
Supplier or its affiliates as such which may include allowing customers to access and maintain
virtual rooms and other multi-user capabilities and functions with other users and permit users to
submit, upload and/or post information, opinions, messages, comments, virtual assets and other
content and material.
2. ORDERS; LICENSES; AND RESTRICTIONS.
2.1 Orders. Subject to the terms and conditions contained in this Agreement, Customer may
purchase licenses for Users to use the Software pursuant to Order Forms. Unless otherwise
specified in the applicable Order Form, (a) the Software may be used by the Users initially named
(whose licenses may be reassigned as set forth below), and in no event by more than the number of
Users specified, in the applicable Order Form, (b) an unlimited number of additional User licenses may
be added at any time during the applicable License Term at such pricing as shall be set forth in
the Order Form for the additional User licenses, and invoiced separately from the then-existing
User licenses for the remainder of such License Term, and (c) the added User licenses shall
terminate upon expiration of the same License Term as the pre-existing User licenses. Unless
otherwise provided in the applicable Order Form, User licenses are for designated Users only
and cannot be shared or used by more than one User, but may be reassigned to new Users
replacing former Users who no longer require ongoing use of the Software. Customer agrees
that its purchases hereunder are neither contingent on the delivery of any future functionality or
features nor dependent on any oral or written public comments made by Supplier regarding any
future functionality or features.
2.2 License Grant. Subject to Customer’s compliance with the terms and conditions
contained in this Agreement, Supplier hereby grants to Customer, during the relevant License
Term, a limited, non-exclusive, non-assignable/non-transferable (except as expressly permitted
herein) right for its Users to use and reproduce the applicable Version of the Software specified
in the Order Form in accordance with the Documentation in each case solely for Customer’s
Authorized Purposes and not for the benefit of any other person or entity. Customer may make
a reasonable number of backup copies of the Software solely for Customer’s internal use
pursuant to the license granted in this Section. Customer’s use of the Software may be subject
to certain limitations, such as, for example, limits on storage capacity for Customer Data. Any
such limitations will be specified either in the Order Form or in the Documentation.
2.3 Restrictions. Customer shall not, directly or indirectly, and Customer shall not permit any
User or third party to: (a) reverse engineer, decompile, disassemble or otherwise attempt to
discover the source code or underlying ideas or algorithms of the Software; (b) modify,
translate, or create derivative works based on any element of the Software or any related
Documentation; (c) rent, lease, distribute, sell, resell, assign, or otherwise transfer its rights to
use the Software; (d) use the Software for timesharing purposes or otherwise for the benefit of
any person or entity other than for the benefit of Customer and Users; (e) remove any
proprietary notices from the Documentation; (f) publish or disclose to third parties any evaluation
or benchmarking of the Software without Supplier's prior written consent; (g) use the Software
for any purpose other than its intended purpose; (h) interfere with or disrupt the integrity or
performance of the Software; (i) introduce any Open Source Software into the Software; or (j)
attempt to gain unauthorized access to the Software or Supplier’s systems or networks.
2.4 Reservation of Rights. Except as expressly granted in this Agreement, there are no
other licenses granted to Customer, express, implied or by way of estoppel. All rights not
granted in this Agreement are reserved by Supplier.
3. THIRD PARTY OFFERINGS.
3.1 Use of Third Party Offerings. Supplier or third parties may from time to time make Third
Party Offerings available to Customer through the Marketplace, Virtual Rooms or otherwise.
Any acquisition by Customer of any such Third Party Offerings, and any exchange of data
between Customer and any provider of a Third Party Offering, is solely between Customer and
the applicable provider of the Third Party Offering. Supplier does not warrant or support any
Third Party Offering, whether or not it is available via a Supplier Marketplace or Virtual
Rooms or designated by Supplier as “approved”, “certified” or otherwise, except as specified in
an Order Form. Supplier shall not be responsible for any disclosure, modification or deletion of
Customer Data resulting from any such access by the providers of Third Party Offerings.
3.2 Integration with Third Party Offerings. The Software may contain features designed to
interoperate with Third Party Offerings (e.g., Google, Facebook/Oculus or Twitter applications).
To use such features, Customer may be required to obtain access to such Third Party Offering
from their providers. If the provider of any Third Party Offering ceases to make the Third Party
Offering available for interoperation with the corresponding Software features on reasonable
terms, certain features of the Software may not be available to Customer.
4. DELIVERY; ACCOUNT REGISTRATION.
4.1 Delivery. Supplier will make the Software available for download to Customer from a
secure server. The Software will be deemed accepted upon delivery and may not be rejected
by Customer.
4.2 Account Registration; Login. Customer will be required to register for an account in
order to use the Software which will be subject to the Nanome Terms of Service and Privacy
Policy. Customer will be required to submit certain personal information when registering for an
account. Customer will be required to log into Customer’s account in order to access and use
the Software. Upon such login, Supplier will authenticate Customer’s login information in order
to verify Customer’s access to the Software.
5. CUSTOMER OBLIGATIONS.
5.1 Customer System. Customer is responsible for (a) obtaining, deploying and maintaining
the Customer System, and all computer hardware, software, modems, routers and other
computer and communications equipment necessary for Customer, its Affiliates and their
respective Users to use the Software; and (b) paying all third party fees and access charges
incurred in connection with the foregoing. Except as specifically set forth in this Agreement, an
Order Form or an SOW, Supplier shall not be responsible for supplying any hardware, software
or other equipment to Customer under this Agreement. In the event that Supplier does supply
to Customer any hardware or other equipment, such hardware or equipment may be supplied to
Customer subject to separate terms and conditions provided to Customer by Supplier and the
acquisition of such hardware will be subject to the manufacturer’s standard terms.
6. MAINTENANCE AND SUPPORT SERVICES.
6.1 Maintenance and Support. Subject to the terms and conditions of this Agreement
(including payment of the applicable fees, if any), Supplier will use commercially reasonable
efforts to provide the Support Services to the extent that Customer has purchased the
applicable Support Level (Level 1, 2 or 3) during the Order process or on the applicable Order
Form, which Support Levels are described at www.nanome.ai/support, as the same may be updated.
Support Services may include Updates generally issued by Supplier to customers during the
applicable License Term. In no event will Support apply with respect to Third Party Offerings.
6.2 Support Term; Termination. Unless otherwise specified in an Order Form, Supplier will
provide Support Services during the License Term starting on the Delivery Date (the “Support
Period”).
6.3 Non-GA Solutions and Evaluation Licenses. Except as expressly set forth in an Order
Form, no Support Services are offered or made in connection with this Agreement for Non-GA
Solutions or Evaluation Licenses (other than Level One support for Evaluation Licenses), and
Supplier will not be obligated in any way to correct any errors or deficiencies in the Software or
to provide Updates or new builds.
7. ANCILLARY SERVICES.
7.1 Supplier shall use commercially reasonable efforts to timely perform the Ancillary
Services as set forth in applicable mutually executed SOWs. Each SOW will include, at a
minimum: (a) a description of the scope of Ancillary Services, (b) any work product or other
deliverables to be provided to Customer (each a “Deliverable”), (c) the schedule for the
provision of Ancillary Services, and (d) the applicable fees and payment terms for such Ancillary
Services. All SOWs shall be deemed part of and subject to this Agreement. If there is any
inconsistency between an SOW and this Agreement, the SOW shall control. If either Customer
or Supplier requests a change to the scope of Ancillary Services described in a SOW, the party
seeking the change shall propose such change by written notice. Promptly following the other
party’s receipt of the written notice, the parties shall discuss and agree upon the proposed
changes. Supplier will prepare a change order document describing the agreed changes to the
SOW and any applicable change in fees and expenses (a “Change Order”). Change Orders are
not binding unless and until executed by both parties. Executed Change Orders shall be
deemed part of, and subject to, this Agreement. Supplier and Customer shall cooperate to
enable Supplier to perform the Ancillary Services according to the dates of performance and
delivery terms set forth in each SOW. In addition, Customer shall perform any Customer
obligations specified in each SOW. In the event the Ancillary Services are not performed in
accordance with the terms of the applicable SOW, Customer shall notify Supplier in writing no
later than thirty (30) calendar days after performance of the affected Ancillary Services by
Supplier. Customer’s notice shall specify the basis for non-compliance with the SOW, and if
Supplier agrees with the basis for non-compliance, then at Supplier’s sole option, Supplier shall
re-perform the Ancillary Services at no additional charge to Customer or refund to Customer the
applicable fees for the affected Deliverable or Ancillary Service. THE FOREGOING
CONSTITUTES CUSTOMER’S SOLE AND EXCLUSIVE REMEDY AND SUPPLIER’S SOLE
AND EXCLUSIVE LIABILITY WITH RESPECT TO PERFORMANCE OR
NON-PERFORMANCE OF THE ANCILLARY SERVICES.
8. FEES AND PAYMENT.
8.1 Fees. Customer agrees to pay all fees specified in all Order Forms and SOWs using one
of the payment methods Supplier supports. Except as otherwise specified in this Agreement or
in an Order Form, (a) fees are quoted and payable in United States dollars and certain
cryptocurrencies (such as Bitcoin, Ethereum and Matryx) at their then current market rate, as
determined by Supplier, (b) fees are based on licenses purchased for the number of Users
specified in the Order Form, (c) payment obligations are non-cancelable and fees paid are
non-refundable, and (d) fees are payable in advance. User license fees are based on the License
Term specified in the Order Form beginning on the Activation Date; therefore, fees for licenses
for additional Users or Plugins added in the middle of a License Term will be charged for a
prorated License Term. All amounts payable under this Agreement will be made without setoff
or counterclaim, and without any deduction or withholding.
8.2 Invoices and Payment. All fees for Software and applicable Support Services will be
invoiced in advance and in accordance with the applicable Order Form. Fees for Ancillary
Services will be invoiced as set forth in an applicable SOW or Order Form. Except as otherwise
set forth in the applicable Order Form or SOW, Customer agrees to pay all invoiced amounts
within thirty (30) calendar days of the invoice date. Customer is responsible for providing
complete and accurate billing and contact information to Supplier and notifying Supplier of any
changes to such information.
8.3 Overdue Charges. If Supplier does not receive fees by the due date, then at Supplier’s
discretion, (a) such charges may accrue late interest at the rate of One Percent (1%) of the
outstanding balance per month, or the maximum rate permitted by law, whichever is lower, from
the date such payment was due until the date paid; and (b) Supplier may condition future
purchases of Software and Support Services on payment terms shorter than those specified in
Section 8.2 or require prepayment.
8.4 Payment Disputes. Supplier agrees that it will not exercise its rights under Section 8.3 if
the applicable charges are under reasonable and good-faith dispute and Customer is
cooperating diligently to resolve the dispute.
8.5 Taxes. “Taxes” means all taxes, levies, imposts, duties, fines or similar governmental
assessments imposed by any jurisdiction, country or any subdivision or authority thereof
including, but not limited to federal, state or local sales, use, property, excise, service,
transaction, privilege, occupation, gross receipts or similar taxes, in any way connected with this
Agreement or any instrument, order form or agreement required hereunder, and all interest,
penalties or similar liabilities with respect thereto, except such taxes imposed on or measured
by a party’s net income. Notwithstanding the foregoing, Taxes shall not include payroll taxes
attributable to the compensation paid to workers or employees and each party shall be
responsible for its own federal and state payroll tax collection, remittance, reporting and filing
obligations. Fees and charges imposed under this Agreement or under any order form or
similar document ancillary to or referenced by this Agreement shall not include Taxes except as
otherwise provided herein. Customer shall be responsible for all of such Taxes. If, however,
Supplier has the legal obligation to pay Taxes and is required or permitted to collect such Taxes
for which Customer is responsible under this section, Customer shall promptly pay the Taxes
invoiced by Supplier unless Customer has furnished Supplier with valid tax exemption
documentation regarding such Taxes at the execution of this Agreement or at the execution of
any subsequent instrument, order form or agreement ancillary to or referenced by this
Agreement. Customer shall comply with all applicable tax laws and regulations. Customer
hereby agrees to indemnify Supplier for any Taxes and related costs paid or payable by
Supplier attributable to Taxes that would have been Customer’s responsibility under this Section
8.5 if invoiced to Customer. Customer shall promptly pay or reimburse Supplier for all costs and
damages related to any liability incurred by Supplier as a result of Customer’s non-compliance
with, or delay in performing, its responsibilities herein. Customer’s obligation under this Section 8.5 shall
survive the termination or expiration of this Agreement.
9. REPRESENTATIONS AND WARRANTIES; DISCLAIMER.
9.1 Mutual Representations and Warranties. Each party represents, warrants and
covenants that: (a) it has the full power and authority to enter into this Agreement and to
perform its obligations hereunder, without the need for any consents, approvals or immunities
not yet obtained; and (b) its acceptance of and performance under this Agreement shall not
breach any oral or written agreement with any third party or any obligation owed by it to any
third party to keep any information or materials in confidence or in trust.
9.2 Non-Generally Available Solutions. From time to time Supplier may, in its sole
discretion, invite Customer to try Non-GA Solutions. Customer may accept or decline any such
trial in its sole discretion. Any Non-GA Solutions will be clearly designated as Beta, pilot, limited
release, developer preview, non-production or by a description of similar import. Non-GA
Solutions are provided for evaluation purposes and not for production use, are not supported,
will likely contain bugs or errors (but shall not knowingly contain any undisclosed Malicious
Code), and may be subject to additional terms that shall be provided by Supplier to Customer
prior to or concurrent with Supplier’s invitation to the applicable Non-GA Solution. Non-GA
Solutions are not considered “Software” hereunder. Supplier has the right to discontinue
Non-GA Solutions at any time in its sole discretion and may never make them generally
available.
9.3 Software Warranty. Unless otherwise set forth in the applicable Order Form, Supplier
warrants that during the period of six (6) months after the Delivery Date (the “Warranty Period”)
the Software will function substantially in conformance with the Documentation. If Customer
becomes aware of the Software not functioning in substantial conformance with the
Documentation (a “Defect”), Customer must provide Supplier with written notice that includes a
reasonably detailed explanation of the Defect within the Warranty Period. If Supplier is able to
reproduce the Defect in Supplier’s own operating environment, Supplier will use commercially
reasonable efforts to promptly correct the Defect or provide a replacement software product to
Customer with substantially similar functionality, or at Supplier’s option, terminate the License
Term for the defective Software and refund to Customer the fees paid for that defective
Software (as well as any fees paid for any Support Services not received). THE FOREGOING
SETS FORTH SUPPLIER’S SOLE AND EXCLUSIVE LIABILITY AND CUSTOMER’S SOLE
AND EXCLUSIVE REMEDY FOR ANY DEFECTIVE SOFTWARE.
9.4 Disclaimer. EXCEPT FOR THE WARRANTIES SET FORTH IN SECTIONS 7.1 AND 9,
THE SOFTWARE, SUPPORT SERVICES, ANCILLARY SERVICES, THIRD-PARTY
OFFERINGS AND ANY NON-GA SOLUTIONS ARE PROVIDED ON AN AS-IS BASIS AND
CUSTOMER’S USE THEREOF IS AT ITS OWN RISK. SUPPLIER DOES NOT MAKE, AND
HEREBY DISCLAIMS, ANY AND ALL OTHER EXPRESS, STATUTORY AND IMPLIED
REPRESENTATIONS AND WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE,
NONINFRINGEMENT AND TITLE, QUALITY, SUITABILITY, OPERABILITY, CONDITION,
SYSTEM INTEGRATION, NON-INTERFERENCE, WORKMANSHIP, TRUTH, ACCURACY (OF
DATA OR ANY OTHER INFORMATION OR CONTENT), ABSENCE OF DEFECTS,
WHETHER LATENT OR PATENT, AND ANY WARRANTIES ARISING FROM A COURSE OF
DEALING, USAGE, OR TRADE PRACTICE. THE EXPRESS WARRANTIES MADE BY
SUPPLIER IN SECTIONS 7.1 AND 9 ARE FOR THE BENEFIT OF THE CUSTOMER ONLY
AND NOT FOR THE BENEFIT OF ANY THIRD PARTY. ANY SOFTWARE PROVIDED BY
SUPPLIER PURSUANT TO THIS AGREEMENT IS LICENSED AND NOT SOLD. NO
WARRANTIES OF ANY KIND WHATSOEVER ARE MADE FOR CUSTOMER’S BENEFIT
DURING THE LICENSE TERM OF ANY FREE LICENSE, EVALUATION LICENSE OR BETA
LICENSE.
NO AGENT OF SUPPLIER IS AUTHORIZED TO ALTER OR EXPAND THE WARRANTIES OF
SUPPLIER AS SET FORTH HEREIN. SUPPLIER DOES NOT WARRANT THAT: (A) THE USE
OF THE SOFTWARE OR NON-GA SOLUTION WILL BE SECURE, TIMELY,
UNINTERRUPTED OR ERROR-FREE OR OPERATE IN COMBINATION WITH ANY OTHER
HARDWARE, SOFTWARE, SYSTEM OR DATA; (B) THE SOFTWARE WILL MEET
CUSTOMER’S REQUIREMENTS OR EXPECTATIONS; (C) THE SOFTWARE AND NON-GA
SOLUTIONS WILL BE ERROR-FREE OR THAT ERRORS OR DEFECTS IN THE SOFTWARE
AND NON-GA SOLUTIONS WILL BE CORRECTED; OR (D) THE SERVER(S) THAT MAKE
ANY COMPONENTS OF THE SOFTWARE AND NON-GA SOLUTION AVAILABLE ARE FREE
OF VIRUSES OR OTHER HARMFUL COMPONENTS. THE SOFTWARE AND NON-GA
SOLUTION MAY BE SUBJECT TO LIMITATIONS, DELAYS, AND OTHER PROBLEMS
INHERENT IN THE USE OF THE INTERNET AND ELECTRONIC COMMUNICATIONS.
SUPPLIER IS NOT RESPONSIBLE FOR ANY DELAYS, DELIVERY FAILURES, OR OTHER
DAMAGES RESULTING FROM SUCH PROBLEMS.
10. INDEMNIFICATION.
10.1 Supplier Indemnity.
(a) General. During the License Term (other than with respect to a Free License, an
Evaluation License or a Beta License), Supplier, at its expense, shall defend Customer and its
Affiliates and their respective officers, directors and employees (the “Customer Indemnified
Parties”) from and against all actions, proceedings, claims and demands by a third party (a
“Third-Party Claim”) alleging that the Software infringes any copyright or misappropriates any
trade secret and shall pay all damages, costs and expenses, including attorneys’ fees and costs
(whether by settlement or award of a final judicial judgment) paid to the Third Party bringing
any such Third-Party Claim. Supplier’s obligations under this Section are conditioned upon (i)
Supplier being promptly notified in writing of any claim under this Section, (ii) Supplier having
the sole and exclusive right to control the defense and settlement of the claim, and (iii)
Customer providing all reasonable assistance (at Supplier’s expense and reasonable request) in
the defense of such claim. In no event shall Customer settle any claim without Supplier’s prior
written approval. Customer may, at its own expense, engage separate counsel to advise
Customer regarding a Third-Party Claim and to participate in the defense of the claim, subject to Supplier’s
right to control the defense and settlement.
(b) Mitigation. If any claim which Supplier is obligated to defend has occurred, or in
Supplier’s determination is likely to occur, Supplier may, in its sole discretion and at its option
and expense (a) obtain for Customer the right to use the Software, (b) substitute a functionally
equivalent, non-infringing replacement for the Software, (c) modify the Software to make it
non-infringing and functionally equivalent, or (d) terminate this Agreement and refund to
Customer any prepaid amounts attributable to the period between the date Customer became
unable to use the Software due to such claim and the end of the then-current
License Term.
(c) Exclusions. Notwithstanding anything to the contrary in this Agreement, the foregoing
obligations shall not apply with respect to a claim of infringement if such claim arises out of (i)
use of the Software in combination with any software, hardware, network or system not supplied
by Supplier where the alleged infringement relates to such combination, (ii) any modification or
alteration of the Software other than by Supplier, (iii) Customer’s continued use of the Software
after Supplier notifies Customer to discontinue use because of an infringement claim, (iv) use of
Open Source Software; (v) Customer’s violation of applicable law; (vi) Third Party Offerings; and
(vii) Customer System.
(d) Sole Remedy. THE FOREGOING STATES THE ENTIRE LIABILITY OF SUPPLIER
WITH RESPECT TO THE INFRINGEMENT OF ANY INTELLECTUAL PROPERTY OR
PROPRIETARY RIGHTS BY THE SOFTWARE OR OTHERWISE, AND CUSTOMER HEREBY
EXPRESSLY WAIVES ANY OTHER LIABILITIES OR OBLIGATIONS OF SUPPLIER WITH
RESPECT THERETO. NO INDEMNITIES OF ANY KIND WHATSOEVER ARE MADE FOR
CUSTOMER’S BENEFIT DURING THE LICENSE TERM OF ANY FREE LICENSE,
EVALUATION LICENSE OR BETA LICENSE.
10.2 Customer Indemnity. Customer shall defend Supplier and its Affiliates, licensors and
their respective officers, directors and employees (“Supplier Indemnified Parties”) from and
against any and all Third-Party Claims which arise out of or relate to: (a) Customer’s use or
alleged use of the Software other than as permitted under this Agreement, (b) Customer’s or its
Affiliates’ Users’ use of the Software in violation of any applicable law or regulation, or the
Intellectual Property Rights or other rights of any third party, or (c) the occurrence of
any of the exclusions set forth in Section 10.1(c) (Exclusions). Customer shall pay all damages,
costs and expenses, including attorneys’ fees and costs (whether by settlement or award of a
final judicial judgment) paid to the Third Party bringing any such Third-Party Claim. Customer’s
obligations under this Section are conditioned upon (x) Customer being promptly notified in
writing of any claim under this Section, (y) Customer having the sole and exclusive right to
control the defense and settlement of the claim, and (z) Supplier providing all reasonable
assistance (at Customer’s expense and reasonable request) in the defense of such claim. In no
event shall Supplier settle any claim without Customer’s prior written approval. Supplier may, at
its own expense, engage separate counsel to advise Supplier regarding a Third-Party Claim and
to participate in the defense of the claim, subject to Customer’s right to control the defense and
settlement.
11. CONFIDENTIALITY.
11.1 Confidential Information. “Confidential Information” means any and all non-public
technical and non-technical information disclosed by one party (the “Disclosing Party”) to the
other party (the “Receiving Party”) in any form or medium, whether oral, written, graphical or
electronic, pursuant to this Agreement, that is marked confidential and proprietary, or that the
Disclosing Party identifies as confidential and proprietary, or that by the nature of the
circumstances surrounding the disclosure or receipt ought to be treated as confidential and
proprietary information, including but not limited to: (a) techniques, sketches, drawings, models,
inventions (whether or not patented or patentable), know-how, processes, apparatus, formulae,
equipment, algorithms, software programs, software source documents, APIs, and other
creative works (whether or not copyrighted or copyrightable); (b) information concerning
research, experimental work, development, design details and specifications, engineering,
financial information, procurement requirements, purchasing, manufacturing, customer lists,
business forecasts, sales and merchandising and marketing plans and information; (c)
proprietary or confidential information of any third party who may disclose such information to
Disclosing Party or Receiving Party in the course of Disclosing Party’s business; and (d) the
terms of this Agreement and any Order Form or SOW. Confidential Information of Supplier shall
include the Software, the documentation, the pricing, and information regarding the
characteristics, features or performance of Beta Licenses and Non-GA Solutions. Confidential
Information also includes all summaries and abstracts of Confidential Information.
11.2 Non-Disclosure. Each party acknowledges that in the course of the performance of this
Agreement, it may obtain the Confidential Information of the other party. The Receiving Party
shall, at all times, both during the term of this Agreement and thereafter, use reasonable efforts
to keep in confidence and trust all of the Disclosing Party’s Confidential Information received by
it. The Receiving Party shall not use the Confidential Information of the Disclosing Party other
than as necessary to fulfill the Receiving Party’s obligations or to exercise the Receiving Party’s
rights under this Agreement. Each party agrees to secure and protect the other party’s
Confidential Information with the same degree of care and in a manner consistent with the
maintenance of such party’s own Confidential Information (but in no event less than reasonable
care), and to take appropriate action by instruction or agreement with its employees, Affiliates or
other agents who are permitted access to the other party’s Confidential Information to satisfy its
obligations under this Section. Customer acknowledges that Supplier will use reasonable efforts
to ensure the confidentiality and access security of information made available by Customer in a
private Virtual Room but that confidentiality cannot be absolutely guaranteed. The Receiving
Party shall not disclose Confidential Information of the Disclosing Party to any person or entity
other than its officers, employees, affiliates and agents who need access to such Confidential
Information in order to effect the intent of this Agreement and who are subject to confidentiality
obligations at least as stringent as the obligations set forth in this Agreement.
11.3 Exceptions to Confidential Information. The obligations set forth in Section 11.2
(Non-Disclosure) shall not apply to the extent that Confidential Information includes information
which: (a) was known by the Receiving Party prior to receipt from the Disclosing Party either
itself or through receipt directly or indirectly from a source other than one having an obligation of
confidentiality to the Disclosing Party; (b) was developed by the Receiving Party without use of
the Disclosing Party’s Confidential Information; or (c) becomes publicly known or otherwise
ceases to be secret or confidential, except as a result of a breach of this Agreement or any
obligation of confidentiality by the Receiving Party. Nothing in this Agreement shall prevent the
Receiving Party from disclosing Confidential Information to the extent the Receiving Party is
legally compelled to do so by any governmental investigative or judicial agency pursuant to
proceedings over which such agency has jurisdiction; provided, however, that prior to any such
disclosure, the Receiving Party shall (x) assert the confidential nature of the Confidential
Information to the agency; (y) immediately notify the Disclosing Party in writing of the agency’s
order or request to disclose; and (z) cooperate fully with the Disclosing Party in protecting
against any such disclosure and in obtaining a protective order narrowing the scope of the
compelled disclosure and protecting its confidentiality.
11.4 Injunctive Relief. The Parties agree that any unauthorized disclosure of Confidential
Information may cause immediate and irreparable injury to the Disclosing Party and that, in the
event of such breach, the Receiving Party will be entitled, in addition to any other available
remedies, to seek immediate injunctive and other equitable relief, without bond and without the
necessity of showing actual monetary damages.
12. PROPRIETARY RIGHTS.
12.1 Software. As between Supplier and Customer, all right, title and interest in the Software
and any other Plugins, materials, software, virtual items and other content furnished or made
available hereunder or via the Marketplace or Virtual Rooms, and all modifications and
enhancements thereof, and all suggestions, ideas and feedback proposed by Customer
regarding any such items, including all copyright rights, patent rights and other Intellectual
Property Rights in each of the foregoing, belong to and are retained solely by Supplier or
Supplier’s licensors and providers, as applicable. If the License Term is for an Evaluation
License or any Non-GA Solutions (including Beta Licenses), Customer shall periodically (and, in
any case, not less than once every thirty (30) days or more frequently as provided in the Order
Form) provide Supplier with written feedback regarding Customer’s use of the Software, the
functionality of the Software, any bugs, errors or deficiencies that Customer encounters
regarding the operation and functionality of the Software and any suggestions that Customer
may have regarding improvement of such operation and functionality (“Feedback”).
Additionally, Customer shall promptly respond to any questions that Supplier may have
regarding such Feedback or to any other questions Supplier may have regarding Customer’s
use of the Software. Customer hereby does and will irrevocably assign to Supplier all Feedback
and all Intellectual Property Rights in the Feedback.
12.2 Customer Data. As between Supplier and Customer, all right, title and interest in (a) the
Customer Data, (b) other information input into the Software by Customer (collectively, “Other
Information”) and (c) all Intellectual Property Rights in each of the foregoing, belong to and are
retained solely by Customer. Customer hereby grants to Supplier a limited, non-exclusive,
royalty-free, worldwide license to use the Customer Data and perform all acts with respect to the
Customer Data as may be necessary for Supplier to provide the Software and Services and any
services available to Customer via the Marketplace or Virtual Rooms, and a non-exclusive,
perpetual, irrevocable, worldwide, royalty-free, fully paid license to use, reproduce, modify and
distribute the Other Information as a part of the Aggregated Statistics (as defined in Section
12.3 below). As between Supplier and Customer, Customer is solely responsible for the
accuracy, quality, integrity, legality, reliability, and appropriateness of all Customer Data.
12.3 Aggregated Statistics. Notwithstanding anything else in this Agreement or otherwise,
Supplier may monitor Customer’s use of the Software and use data and information related to
such use, Customer Data, and Other Information in an aggregate and anonymous manner,
including to compile statistical and performance information related to the provision and
operation of the Software (“Aggregated Statistics”). As between Supplier and Customer, all
right, title and interest in the Aggregated Statistics and all Intellectual Property Rights therein,
belong to and are retained solely by Supplier. Customer acknowledges that Supplier will be
compiling Aggregated Statistics based on Customer Data, Other Information, and information
input by other customers into the Software and Customer agrees that Supplier may (a) make
such Aggregated Statistics publicly available, and (b) use such information to the extent and in
the manner required by applicable law or regulation and for purposes of data gathering,
analysis, and service enhancement, provided that such data and information does not identify
Customer or its Confidential Information.
12.4 Supplier Developments. All inventions, works of authorship and developments
conceived, created, written, or generated by or on behalf of Supplier, whether solely or jointly,
including without limitation, in connection with Supplier’s performance of the Ancillary Services
hereunder, including (unless otherwise expressly set forth in an applicable SOW) all
Deliverables (“Supplier Developments”) and all Intellectual Property Rights therein, shall be the
sole and exclusive property of Supplier. Customer agrees that, except for Customer
Confidential Information, to the extent that the ownership of any contribution by Customer or its
employees to the creation of the Supplier Developments is not, by operation of law or otherwise,
vested in Supplier, Customer hereby assigns and agrees to assign to Supplier all right, title and
interest in and to such Supplier Developments, including without limitation all the Intellectual
Property Rights therein, without the necessity of any further consideration.
12.5 Further Assurances. To the extent any of the rights, title and interest in and to Feedback
or Supplier Developments or Intellectual Property Rights therein cannot be assigned by
Customer to Supplier, Customer hereby grants to Supplier an exclusive, royalty-free,
transferable, irrevocable, worldwide, fully paid-up license (with rights to sublicense through
multiple tiers of sublicensees) to fully use, practice and exploit those non-assignable rights, title
and interest. If the foregoing assignment and license are not enforceable, Customer agrees to
waive and never assert against Supplier those non-assignable and non-licensable rights, title
and interest. Customer agrees to execute any documents or take any actions as may
reasonably be necessary, or as Supplier may reasonably request, to perfect ownership of the
Feedback and Supplier Developments. If Customer is unable or unwilling to execute any such
document or take any such action, Supplier may execute such document and take such action
on Customer’s behalf as Customer’s agent and attorney-in-fact. The foregoing appointment is
deemed a power coupled with an interest and is irrevocable.
12.6 License to Deliverables. Subject to Customer’s compliance with this Agreement,
Supplier hereby grants Customer a limited, non-exclusive, non-transferable license during the
License Term to use the Deliverables solely in connection with Customer’s authorized use of the
Software. Notwithstanding any other provision of this Agreement: (i) nothing herein shall be
construed to assign or transfer any Intellectual Property Rights in the proprietary tools, source
code samples, templates, libraries, know-how, techniques and expertise (“Tools”) used by
Supplier to develop the Deliverables, and to the extent such Tools are delivered with or as part
of the Deliverables, they are licensed, not assigned, to Customer, on the same terms as the
Deliverables; and (ii) the term “Deliverables” shall not include the Tools.
13. LIMITATION OF LIABILITY.
13.1 No Consequential Damages. NEITHER SUPPLIER NOR SUPPLIER’S LICENSORS
OR SUPPLIERS SHALL BE LIABLE FOR ANY INDIRECT, INCIDENTAL, SPECIAL,
CONSEQUENTIAL OR PUNITIVE DAMAGES, OR ANY DAMAGES FOR LOST DATA,
BUSINESS INTERRUPTION, LOST PROFITS, LOST REVENUE OR LOST BUSINESS,
ARISING OUT OF OR IN CONNECTION WITH THIS AGREEMENT, EVEN IF SUPPLIER OR
SUPPLIER’S LICENSORS OR SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES, INCLUDING WITHOUT LIMITATION, ANY SUCH DAMAGES ARISING
OUT OF THE LICENSING, PROVISION OR USE OF THE SOFTWARE, ANCILLARY
SERVICES, SUPPORT SERVICES OR THE RESULTS THEREOF. SUPPLIER WILL NOT BE
LIABLE FOR THE COST OF PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES.
13.2 Limits on Liability. NEITHER SUPPLIER NOR ITS LICENSORS OR SUPPLIERS
SHALL BE LIABLE FOR CUMULATIVE, AGGREGATE DAMAGES GREATER THAN AN
AMOUNT EQUAL TO THE LESSER OF (a) THE AMOUNTS PAID BY CUSTOMER TO
SUPPLIER UNDER THIS AGREEMENT DURING THE PERIOD OF SIX (6) MONTHS
PRECEDING THE DATE ON WHICH THE CLAIM FIRST ACCRUED, AND (b) THE AMOUNT
OF FEES PAID BY CUSTOMER IN A SINGLE LICENSE TERM.
13.3 Essential Purpose. CUSTOMER ACKNOWLEDGES THAT THE TERMS IN THIS
SECTION 13 (LIMITATION OF LIABILITY) SHALL APPLY TO THE MAXIMUM EXTENT
PERMITTED BY APPLICABLE LAW AND SHALL APPLY EVEN IF AN EXCLUSIVE OR
LIMITED REMEDY STATED HEREIN FAILS OF ITS ESSENTIAL PURPOSE AND WITHOUT
REGARD TO WHETHER SUCH CLAIM IS BASED IN CONTRACT, TORT (INCLUDING
NEGLIGENCE), INDEMNITY, PRODUCT LIABILITY OR OTHERWISE.
14. TERM AND TERMINATION; AUTOMATIC RENEWAL.
14.1 Term. The term of this Agreement commences on the Effective Date and continues until
the expiration or termination of all License Term(s), unless earlier terminated as provided in this
Agreement. Except as otherwise specified in the applicable Order Form, License Terms (other
than for Evaluation Licenses and Non-GA Solutions, including Beta Licenses) for all Users shall
automatically renew for additional periods equal to the expiring License Term unless one party
gives the other written notice of non-renewal prior to the desired date of expiration. The per-unit
pricing during any automatic renewal term shall be the same as that during the immediately
prior term unless Supplier has given Customer written notice (either through an e-mail or notice
on the website) of a pricing increase at least thirty (30) days before the end of such prior term, in
which case the pricing increase shall be effective upon renewal and thereafter; provided
however that no such pricing increase shall occur until after expiration of the then current
License Term. Evaluation Licenses and Non-GA Solutions, including Beta Licenses, will
terminate at the end of their respective License Term unless the parties enter into an Order
Form for a new License Term.
14.2 Termination for Cause. A party may terminate this Agreement and any SOW (and all
License Term(s)) upon written notice to the other party in the event the other party (a) files a
petition for bankruptcy or has a petition for bankruptcy filed against it that is not dismissed within
sixty (60) days after filing or admits its inability to pay its debts as they mature, makes an
assignment for the benefit of its creditors or ceases to function as a going concern or to conduct
its operations in the normal course of business and such termination shall occur immediately
upon notice; or (b) commits a material breach of any provision of this Agreement and does not
remedy such breach within thirty (30) days (or ten (10) days in the case of a failure to pay any fees
hereunder) after receipt of notice from the other party or such other period as the parties may
agree. Upon any termination for cause by Customer, Supplier shall refund Customer the pro
rated amount of any prepaid fees for the remainder of the terminated License Terms after the
effective termination date. Upon any termination for cause by Supplier, Customer shall pay any
unpaid fees covering the remainder of the term of all Order Forms after the effective date of
termination. In no event shall any termination relieve Customer of the obligation to pay any fees
payable to Supplier for the period prior to the effective date of termination.
14.3 Termination for Convenience. Either party shall have the right to terminate any License
Term for convenience on at least ten (10) days prior written notice to the other party. Pre-paid
fees for the then-current License Term are non-refundable unless Supplier exercises such
termination right for convenience in which case Supplier shall refund to Customer the pro rated
amount of any pre-paid fees for the terminated License Term.
14.4 Effects of Termination. Upon expiration or termination of this Agreement, (a) Customer’s
use of and access to the Software and Supplier's performance of all Support Services and
Ancillary Services shall cease; (b) all Order Forms and SOWs shall terminate; (c) all fees and
other amounts owed to Supplier shall be immediately due and payable by Customer, including
without limitation, all fees incurred under any outstanding SOW up through the date of
termination for any Ancillary Services completed and a pro-rated portion of the fees incurred for
any partially completed Ancillary Services; and (d) in the case of a termination for convenience
by Supplier or termination by Customer due to Supplier’s breach, Supplier will refund to
Customer the amount of any pre-paid fees for the terminated License Term and pre-paid fees
for Support Services for the terminated portion of the Support Period. Within ten (10) days of
the effective date of termination, each Receiving Party shall: (a) return to the Disclosing Party or,
at the Disclosing Party’s option, destroy all items of Confidential
Information then in the Receiving Party’s possession or control, including any copies, extracts or
portions thereof, and (b) upon request, certify in writing to the Disclosing Party that it has
complied with the foregoing.
14.5 Survival. This Section and Sections 1, 2.4, 8, 9, 10, 11, 12, 13, 14.4, and 15 shall
survive any termination or expiration of this Agreement.
15. MISCELLANEOUS.
15.1 Notices. Supplier may give notice to Customer by means of electronic mail to Customer’s
e-mail address on record with Supplier, or by written communication sent by first class postage
prepaid mail or nationally recognized overnight delivery service to Customer’s address on
record with Supplier. Customer may give notice to Supplier by written communication sent by
e-mail to support@nanome.ai or by first class postage prepaid mail or nationally recognized
overnight delivery service addressed to Supplier, 7770 Regents Rd Suite #113, Box #102, San
Diego, CA 92122, Attention: Nanome Inc. Notice shall be deemed to have been given upon
receipt or, if earlier, two (2) business days after mailing, as applicable. All communications and
notices to be made or given pursuant to this Agreement shall be in the English language.
15.2 Governing Law, Dispute Resolution. This Agreement and the rights and obligations of
the parties to and under this agreement shall be governed by and construed under the laws of
the United States and the State of California as applied to agreements entered into and to be
performed in such State without giving effect to conflicts of laws rules or principles. The parties
agree that the United Nations Convention on Contracts for the International Sale of Goods is
specifically excluded from application to this Agreement. The parties further agree to waive and
opt-out of any application of the Uniform Computer Information Transactions Act (UCITA), or
any version thereof, adopted by any state of the United States in any form. Any disputes arising
out of or in connection with this Agreement, including but not limited to any question regarding
its existence, interpretation, validity, performance or termination, or any dispute between the
parties arising from the parties’ relationship created by this Agreement, shall be heard in the
state and federal courts located in San Diego County, State of California and the parties hereby
consent to exclusive jurisdiction and venue in such courts.
15.3 Publicity. Supplier has the right to reference and use Customer’s name and trademarks
and disclose the Software provided hereunder, in each case in Supplier’s business development
and marketing efforts, including without limitation Supplier’s web site and marketing materials.
In the event that Customer publishes any Customer Data resulting from Customer’s use of the
Software, Customer shall include a reference providing that the Software was used as a tool.
15.4 U.S. Government Customers. If Customer is a Federal Government entity, Supplier
provides the Software, including related software and technology, for ultimate Federal
Government end use solely in accordance with the following: Government technical data rights
include only those rights customarily provided to the public with a commercial item or process
and Government software rights related to the Software include only those rights customarily
provided to the public, as defined in this Agreement. The technical data rights and customary
commercial software license is provided in accordance with FAR 12.211 (Technical Data) and
FAR 12.212 (Software) and, for Department of Defense transactions, DFAR 252.227-7015
(Technical Data – Commercial Items) and DFAR 227.7202-3 (Rights in Commercial Computer
Software or Computer Software Documentation). If greater rights are needed, a mutually
acceptable written addendum specifically conveying such rights must be included in this
Agreement.
15.5 Export. The Software utilizes software and technology that may be subject to United
States and foreign export controls. Customer acknowledges and agrees that the Services shall
not be used, and none of the underlying information, software, or technology may be transferred
or otherwise exported or re-exported to countries as to which the United States maintains an
embargo (collectively, “Embargoed Countries”), or to or by a national or resident thereof, or any
person or entity on the U.S. Department of Treasury’s List of Specially Designated Nationals or
the U.S. Department of Commerce’s Table of Denial Orders (collectively, “Designated
Nationals”). The lists of Embargoed Countries and Designated Nationals are subject to change
without notice. By using the Software, Customer represents and warrants that it is not located
in, under the control of, or a national or resident of an Embargoed Country or Designated
National, or that has been designated by the U.S. Government as a “terrorist supporting”
country. The Software may use encryption technology that is subject to licensing requirements
under the U.S. Export Administration Regulations, 15 C.F.R. Parts 730-774 and Council
Regulation (EC) No. 1334/2000. Customer agrees to comply strictly with all applicable export
laws and assume sole responsibility for obtaining licenses to export or re-export as may be
required. Supplier and its licensors make no representation that the Software is appropriate or
available for use in other locations. Any diversion of the Customer Data contrary to law is
prohibited. None of the Customer Data, nor any information acquired through the use of the
Software, is or will be used for nuclear activities, chemical or biological weapons, or missile
projects.
15.7 General. Customer shall not assign its rights hereunder, or delegate the performance of
any of its duties or obligations hereunder, whether by merger, acquisition, sale of assets,
operation of law, or otherwise, without the prior written consent of Supplier. Any purported
assignment in violation of the preceding sentence is null and void. Subject to the foregoing, this
Agreement shall be binding upon, and inure to the benefit of, the successors and assigns of the
parties thereto. With the exception of Affiliates of Customer who have executed Order Forms
under this Agreement, there are no third-party beneficiaries to this Agreement. Except as
otherwise specified in this Agreement, this Agreement may be amended or supplemented only
by a writing that refers explicitly to this Agreement and that is signed on behalf of both parties.
No waiver will be implied from conduct or failure to enforce rights. No waiver will be effective
unless in a writing signed on behalf of the party against whom the waiver is asserted. If any
term of this Agreement is found invalid or unenforceable, that term will be enforced to the
maximum extent permitted by law and the remainder of this Agreement will remain in full force.
The parties are
independent contractors and nothing contained herein shall be construed as creating an
agency, partnership, or other form of joint enterprise between the parties. This Agreement,
including all applicable Order Forms, SOWs and separate or additional terms referred to herein,
constitutes the entire agreement between the parties relating to this subject matter and
supersedes all prior or simultaneous understandings, representations, discussions,
negotiations, and agreements, whether written or oral. Except for payment obligations
hereunder, neither party shall be liable to the other party or any third party for failure or delay in
performing its obligations under this Agreement when such failure or delay is due to any cause
beyond the control of the party concerned, including, without limitation, acts of God,
governmental orders or restrictions, fire, or flood, provided that upon cessation of such events
such party shall thereupon promptly perform or complete the performance of its obligations
hereunder.
Nanome End User License Agreement ©2018 Nanome, Inc. All rights reserved. Nanome is a
trademark of Nanome, Inc. in the US and other countries.
Nanome, Inc. End User License Agreement, version 2.0 (July 2018)


Paying Is Voluntary at This Selfie-Friendly Store

First there was self-checkout. Then Amazon’s cashier-free Go stores. Now there’s pay when you feel like it — we trust you.

At Drug Store, a narrow, black-and-white-tiled store that opened Wednesday in Manhattan’s Tribeca neighborhood, there is no cashier or checkout counter. Anyone can walk in, grab a $10.83 activated-charcoal drink and leave.

But the beverages, typically sold online by the case by Dirty Lemon, a start-up that runs the store, are not free. Dirty Lemon has made a bet that customers will pay the same way they order its pricey lemon-flavored drinks for home delivery: by sending the company a text message.

In the store, customers are expected to text Dirty Lemon to say they have grabbed something. A representative will then text back with a link to enter their credit card information, adding, “Let us know if you need anything else.”

Zak Normandin, the company’s chief executive, said he was not worried that Drug Store’s honor system would encourage theft. “I do think a majority of people would feel very guilty for continuing to steal,” he said in a recent interview at the store.

When asked how much money Dirty Lemon was willing to lose to theft, Mr. Normandin demurred, noting that the company would write down any losses as sampling costs.

Founded in 2015, Dirty Lemon counts 100,000 customers, around half of whom order at least a case of six beverages each month. Its high prices, text-message ordering and beauty claims are helping it get attention in a business littered with new health-focused drink brands. Dirty Lemon’s “sleep tonic” contains magnesium, a “beauty elixir” drink features collagen, and an anti-aging drink contains rose water.

The company is closing a round of venture capital funding from celebrities and investors, including Winklevoss Capital, Betaworks and the investment fund of the YouTube stars Jake Paul and Cameron Dallas.

Mr. Normandin said his conviction in Dirty Lemon’s store was so strong that he had already made plans to open another one in New York and two more in other cities, all featuring a separate V.I.P. lounge with a bar and special events. The company has shifted almost all of its $4 million annual digital advertising budget into its retail stores.

Dirty Lemon is forging ahead into brick-and-mortar stores when many traditional retailers are closing locations and investing in digital marketing and e-commerce. But Mr. Normandin said his customers, who are mainly young women, were tired of digital marketing that constantly pushed them to buy things. Rather, he said, they seek unique in-person experiences.

“They want to actually be kind of immersed in a brand, and take it all in, and maybe take a picture,” he said.

Dirty Lemon’s Drug Store features a large, selfie-friendly mirror that reflects a wall of coolers and stark, black-and-white-striped penny tiles creeping across the high ceiling.

So-called immersive pop-up stores and museums, optimized for social media, have proliferated in recent years. This summer, visitors to Rosé Mansion in New York wandered 14 rooms of highly stylized Instagram-bait, sharing geotagged photos, GIFs, and videos of bubble pits and cava fountains. This month, 29Rooms offers an equally Instagram-able “interactive fun house” in Brooklyn. The Museum of Ice Cream and Candytopia, both of them in New York and San Francisco, are comparably photogenic.

Mr. Normandin said his company’s plans went beyond lemon drinks and selfie mirrors to transforming the beverage industry’s distribution methods. The company aspires to “rebuild the infrastructure that has powered beverages since the 1800s,” he said.

That will take a lot of text messages.

Erin Griffith is on Twitter: @eringriffith.


Why Edinburgh's clock is (almost) never on time

Arrive in Edinburgh on any given day and there are certain things you can guarantee. The fairy-tale Gothic of the royal castle, built on an extinct volcanic plug. The medieval riddle of alleys and lanes. The majesty of the churchyards and macabre spires set against a barb of basalt crags, all as if created by a mad god.

Yet there is one other given in the Scottish capital, and it is the hallmark of Princes Street, the city’s main thoroughfare that runs east to west joining Leith to the West End. The time on the turret clock atop The Balmoral Hotel is always wrong. By three minutes, to be exact.

While the clock tower’s story is legendary in Edinburgh, it remains a riddle for many first-timers. To the untrained eye, the 58m-high landmark is simply part of the grand finale when surveyed from Calton Hill, Edinburgh’s go-to city-centre viewpoint. There it sits to the left of the Dugald Stewart Monument, like a giant exclamation mark above the glazed roof of Waverley Train Station.

Likewise, the sandstone baronial tower looks equally glorious when eyed from the commanding northern ramparts of Edinburgh Castle while peering out over the battlements. It is placed at the city’s very centre of gravity, between the Old Town and the New Town, at the confluence of all business and life. Except, of course, that the dial’s big hand and little hand are out of sync with Greenwich Mean Time.

It is a calculated miscalculation that helps keep the city on time

This bold irregularity is, in fact, a historical quirk first introduced in 1902 when the Edwardian-era building opened as the North British Station Hotel. Then, as now, it overlooked the platforms and signal boxes of Waverley Train Station, and just as porters in red jackets met guests off the train, whisking them from the station booking hall to the interconnected reception desk in the hotel’s basement, the North British Railway Company owners wanted to make sure their passengers – and Edinburgh’s hurrying public – wouldn’t miss their trains.

Given an extra three minutes, they reasoned, these travellers would have more time on the clock to collect their tickets, to reach their corridor carriages and to unload their luggage before the stationmaster’s whistle blew. Still today, it is a calculated miscalculation that helps keep the city on time.

The sky was overcast and the air bitingly cold on the day I visited to learn this history, guided by the hotel’s security manager Iain Davidson. After a quick briefing, I followed his echoing footsteps into the dimly lit brickwork turret, a transition from front of house to backstage. In between the sixth floor’s suites, we entered a door that could well have led to a broom cupboard. Above that, beyond the water storage tanks, a black spiral staircase corkscrewed into the tower’s crown through a series of wooden landings. Each step up was a step back in time.

“Visually, this is one of Edinburgh’s most interesting, if secretive, places,” said Davidson, reaching the top as daylight flooded in to reveal a brickwork gallery embellished with four symmetrical clock faces. Around us, the airy attic featured slit windows that afforded views of central Edinburgh’s commercial hodgepodge, raising us to the level of the castle and the chimneys of the Royal Mile. “Everyone always wonders what it’s like up here when they’re on the street below. Isn’t it marvellous?”

Everyone relies on it being wrong

While exploring the nooks and crannies, Davidson explained that the only major change over the past 116 years came in the 1970s, when the manually wound clock was electrified. “It means the tower doesn’t get as many visitors as people might think.”

That the clock is wrong every day of the year is not technically true, either. Its time is stretched to accommodate an annual event. On New Year’s Eve, or Hogmanay as Scots call it, the tower welcomes a special one-off house call, when an engineer is dispatched to remedy the timekeeping error. “Plain and simple, the clock needs to be right for the traditional countdown to the midnight bells,” said Davidson, leading our two-man party back down to the hotel’s grand lobby. “Beyond that, everyone relies on it being wrong.”

While the turret clock has remained dependably inaccurate over the past century, the hotel has understandably moved with the times. Following World War Two and the 1948 nationalisation of Britain’s railways, the golden age of steam was over, and so, too, was the era of the railway-owned hotel. Where once stood 112 hotels on the map in 1913, there are now but a handful left. For its part, the North British Station Hotel severed links with the railway in the early 1980s, before being rebranded as The Balmoral in 1990. Two refurbishments totalling £30m and a change of ownership to the Sir Rocco Forte Group followed, and yet the clock’s time was left unaltered.

To learn more, I contacted Smith of Derby, a fifth-generation family-run clockmaker, which has maintained The Balmoral’s turret clock for almost a century through its Broxburn-based subsidiary James Ritchie & Son.

Among the other world-famous clocks under its guardianship are those aloft on St Paul’s Cathedral and the elegant Victorian dial at St Pancras Station in London; and the 64m tower anchoring the Majlis Oman, the parliament in Muscat. Smith of Derby’s greatest achievement, however, is the world’s largest mechanical clock, a 12.8m-diameter, pendulum-operated timepiece that decorates the Harmony Clock Tower in Ganzhou, China.

“We look after 5,000 different clock towers around the world, and to say The Balmoral’s is peculiar is a massive understatement,” the firm’s Tony Charlesworth told me. “It’s hard to believe, but it’s the only one we’re paid to keep wrong.”

Charlesworth has other stories, too. In 2012, the clock ran 90 minutes late after a power cut caused by tram workers, when Princes Street saw the return of electric tracks. Another episode, two years earlier, saw it inexplicably stop for the first time in 108 years. And for those romantics, a story lingers that the clock runs fast to give departing lovers longer to kiss before saying their goodbyes.

“There’s never been a time when we’ve been asked to make it right,” Charlesworth said, matter-of-factly. “People have smartphones and watches, of course, but you’ll be surprised by how much they rely on public clocks, especially when they’re in a rush. There’s still a need for it, and for the foreseeable future it’ll still be wrong.”

Today, the wrong time is taken for granted in Edinburgh, not because of retrospective sentimentality, but because familiarity breeds affection. Or at least that’s how Charlesworth sees it. “There’d be a public outcry if it was ever on time,” he said. “Remember, this is Scotland. People wouldn’t put up with it.”

There’d be a public outcry if it was ever on time

In this city of meticulous town planning, dependable tourist crowds and annual festivals, that’s something you could set your watch by. Those extra three minutes reveal everything about living here, right now.


The Transport Layer Security (TLS) Protocol Version 1.3

draft-ietf-tls-tls13-latest

This document specifies version 1.3 of the Transport Layer Security (TLS) protocol. TLS allows client/server applications to communicate over the Internet in a way that is designed to prevent eavesdropping, tampering, and message forgery.

This document updates RFCs 4492, 5705, and 6066 and it obsoletes RFCs 5077, 5246, and 6961. This document also specifies new requirements for TLS 1.2 implementations.

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on September 27, 2018.

Copyright (c) 2018 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

This document may contain material from IETF Documents or IETF Contributions published or made publicly available before November 10, 2008. The person(s) controlling the copyright in some of this material may not have granted the IETF Trust the right to allow modifications of such material outside the IETF Standards Process. Without obtaining an adequate license from the person(s) controlling the copyright in such materials, this document may not be modified outside the IETF Standards Process, and derivative works of it may not be created outside the IETF Standards Process, except to format it for publication as an RFC or to translate it into languages other than English.


RFC EDITOR: PLEASE REMOVE THE FOLLOWING PARAGRAPH The source for this draft is maintained in GitHub. Suggested changes should be submitted as pull requests at https://github.com/tlswg/tls13-spec. Instructions are on that page as well. Editorial changes can be managed in GitHub, but any substantive change should be discussed on the TLS mailing list.

The primary goal of TLS is to provide a secure channel between two communicating peers; the only requirement from the underlying transport is a reliable, in-order, data stream. Specifically, the secure channel should provide the following properties:

  • Authentication: The server side of the channel is always authenticated; the client side is optionally authenticated. Authentication can happen via asymmetric cryptography (e.g., RSA [RSA], ECDSA [ECDSA], EdDSA [RFC8032]) or a pre-shared key (PSK).
  • Confidentiality: Data sent over the channel after establishment is only visible to the endpoints. TLS does not hide the length of the data it transmits, though endpoints are able to pad TLS records in order to obscure lengths and improve protection against traffic analysis techniques.
  • Integrity: Data sent over the channel after establishment cannot be modified by attackers.

These properties should be true even in the face of an attacker who has complete control of the network, as described in [RFC3552]. See Appendix E for a more complete statement of the relevant security properties.

TLS consists of two primary components:

  • A handshake protocol (Section 4) that authenticates the communicating parties, negotiates cryptographic modes and parameters, and establishes shared keying material. The handshake protocol is designed to resist tampering; an active attacker should not be able to force the peers to negotiate different parameters than they would if the connection were not under attack.
  • A record protocol (Section 5) that uses the parameters established by the handshake protocol to protect traffic between the communicating peers. The record protocol divides traffic up into a series of records, each of which is independently protected using the traffic keys.

TLS is application protocol independent; higher-level protocols can layer on top of TLS transparently. The TLS standard, however, does not specify how protocols add security with TLS; how to initiate TLS handshaking and how to interpret the authentication certificates exchanged are left to the judgment of the designers and implementors of protocols that run on top of TLS.

This document defines TLS version 1.3. While TLS 1.3 is not directly compatible with previous versions, all versions of TLS incorporate a versioning mechanism which allows clients and servers to interoperably negotiate a common version if one is supported by both peers.

This document supersedes and obsoletes previous versions of TLS including version 1.2 [RFC5246]. It also obsoletes the TLS ticket mechanism defined in [RFC5077] and replaces it with the mechanism defined in Section 2.2. Section 4.2.7 updates [RFC4492] by modifying the protocol attributes used to negotiate Elliptic Curves. Because TLS 1.3 changes the way keys are derived, it updates [RFC5705] as described in Section 7.5. It also changes how OCSP messages are carried and therefore updates [RFC6066] and obsoletes [RFC6961] as described in Section 4.4.2.1.

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in BCP 14 [RFC2119][RFC8174] when, and only when, they appear in all capitals, as shown here.

The following terms are used:

client: The endpoint initiating the TLS connection.

connection: A transport-layer connection between two endpoints.

endpoint: Either the client or server of the connection.

handshake: An initial negotiation between client and server that establishes the parameters of their subsequent interactions within TLS.

peer: An endpoint. When discussing a particular endpoint, “peer” refers to the endpoint that is not the primary subject of discussion.

receiver: An endpoint that is receiving records.

sender: An endpoint that is transmitting records.

server: The endpoint which did not initiate the TLS connection.

RFC EDITOR PLEASE DELETE THIS SECTION.

(*) indicates changes to the wire protocol which may require implementations to update.

draft-28

  • Add a section on exposure of PSK identities.

draft-27

  • SHOULD->MUST for being able to process “supported_versions” without 0x0304.
  • Much editorial cleanup.

draft-26

  • Clarify that you can’t negotiate pre-TLS 1.3 with supported_versions.

draft-25

  • Add the header to additional data (*)
  • Minor clarifications.
  • IANA cleanup.

draft-24

  • Require that CH2 have version 0303 (*)
  • Some clarifications

draft-23

  • Renumber key_share (*)
  • Add a new extension and new code points to allow negotiating PSS separately for certificates and CertificateVerify (*)
  • Slightly restrict when CCS must be accepted to make implementation easier.
  • Document protocol invariants
  • Add some text on the security of static RSA.

draft-22

  • Implement changes for improved middlebox penetration (*)
  • Move server_certificate_type to encrypted extensions (*)
  • Allow resumption with a different SNI (*)
  • Padding extension can change on HRR (*)
  • Allow an empty ticket_nonce (*)
  • Remove requirement to immediately respond to close_notify with close_notify (allowing half-close)

draft-21

  • Add a per-ticket nonce so that each ticket is associated with a different PSK (*).
  • Clarify that clients should send alerts with the handshake key if possible.
  • Update state machine to show rekeying events
  • Add discussion of 0-RTT and replay. Recommend that implementations implement some anti-replay mechanism.

draft-20

  • Add “post_handshake_auth” extension to negotiate post-handshake authentication (*).
  • Shorten labels for HKDF-Expand-Label so that we can fit within one compression block (*).
  • Define how RFC 7250 works (*).
  • Re-enable post-handshake client authentication even when you do PSK. The previous prohibition was editorial error.
  • Remove cert_type and user_mapping, which don’t work on TLS 1.3 anyway.
  • Added the no_application_protocol alert from [RFC7301] to the list of extensions.
  • Added discussion of traffic analysis and side channel attacks.

draft-19

  • Hash context_value input to Exporters (*)
  • Add an additional Derive-Secret stage to Exporters (*).
  • Hash ClientHello1 in the transcript when HRR is used. This reduces the state that needs to be carried in cookies. (*)
  • Restructure CertificateRequest to have the selectors in extensions. This also allowed defining a “certificate_authorities” extension which can be used by the client instead of trusted_ca_keys (*).
  • Tighten record framing requirements and require checking of them (*).
  • Consolidate “ticket_early_data_info” and “early_data” into a single extension (*).
  • Change end_of_early_data to be a handshake message (*).
  • Add pre-extract Derive-Secret stages to key schedule (*).
  • Remove spurious requirement to implement “pre_shared_key”.
  • Clarify location of “early_data” from server (it goes in EE, as indicated by the table in S 10).
  • Require peer public key validation
  • Add state machine diagram.

draft-18

  • Remove unnecessary resumption_psk which is the only thing expanded from the resumption master secret. (*).
  • Fix signature_algorithms entry in extensions table.
  • Restate rule from RFC 6066 that you can’t resume unless SNI is the same.

draft-17

  • Remove 0-RTT Finished and resumption_context, and replace with a psk_binder field in the PSK itself (*)
  • Restructure PSK key exchange negotiation modes (*)
  • Add max_early_data_size field to TicketEarlyDataInfo (*)
  • Add a 0-RTT exporter and change the transcript for the regular exporter (*)
  • Merge TicketExtensions and Extensions registry. Changes ticket_early_data_info code point (*)
  • Replace Client.key_shares in response to HRR (*)
  • Remove redundant labels for traffic key derivation (*)
  • Harmonize requirements about cipher suite matching: for resumption you need to match KDF but for 0-RTT you need whole cipher suite. This allows PSKs to actually negotiate cipher suites. (*)
  • Move SCT and OCSP into Certificate.extensions (*)
  • Explicitly allow non-offered extensions in NewSessionTicket
  • Explicitly allow predicting client Finished for NST
  • Clarify conditions for allowing 0-RTT with PSK

draft-16

  • Revise version negotiation (*)
  • Change RSASSA-PSS and EdDSA SignatureScheme codepoints for better backwards compatibility (*)
  • Move HelloRetryRequest.selected_group to an extension (*)
  • Clarify the behavior of no exporter context and make it the same as an empty context.(*)
  • New KeyUpdate format that allows for requesting/not-requesting an answer. This also means changes to the key schedule to support independent updates (*)
  • New certificate_required alert (*)
  • Forbid CertificateRequest with 0-RTT and PSK.
  • Relax requirement to check SNI for 0-RTT.

draft-15

  • New negotiation syntax as discussed in Berlin (*)
  • Require CertificateRequest.context to be empty during handshake (*)
  • Forbid empty tickets (*)
  • Forbid application data messages in between post-handshake messages from the same flight (*)
  • Clean up alert guidance (*)
  • Clearer guidance on what is needed for TLS 1.2.
  • Guidance on 0-RTT time windows.
  • Rename a bunch of fields.
  • Remove old PRNG text.
  • Explicitly require checking that handshake records not span key changes.

draft-14

  • Allow cookies to be longer (*)
  • Remove the “context” from EarlyDataIndication as it was undefined and nobody used it (*)
  • Remove 0-RTT EncryptedExtensions and replace the ticket_age extension with an obfuscated version. Also necessitates a change to NewSessionTicket (*).
  • Move the downgrade sentinel to the end of ServerHello.Random to accommodate tlsdate (*).
  • Define ecdsa_sha1 (*).
  • Allow resumption even after fatal alerts. This matches current practice.
  • Remove non-closure warning alerts. Require treating unknown alerts as fatal.
  • Make the rules for accepting 0-RTT less restrictive.
  • Clarify 0-RTT backward-compatibility rules.
  • Clarify how 0-RTT and PSK identities interact.
  • Add a section describing the data limits for each cipher.
  • Major editorial restructuring.
  • Replace the Security Analysis section with a WIP draft.

draft-13

  • Allow server to send SupportedGroups.
  • Remove 0-RTT client authentication
  • Remove (EC)DHE 0-RTT.
  • Flesh out 0-RTT PSK mode and shrink EarlyDataIndication
  • Turn PSK-resumption response into an index to save room
  • Move CertificateStatus to an extension
  • Extra fields in NewSessionTicket.
  • Restructure key schedule and add a resumption_context value.
  • Require DH public keys and secrets to be zero-padded to the size of the group.
  • Remove the redundant length fields in KeyShareEntry.
  • Define a cookie field for HRR.

draft-12

  • Provide a list of the PSK cipher suites.
  • Remove the ability for the ServerHello to have no extensions (this aligns the syntax with the text).
  • Clarify that the server can send application data after its first flight (0.5 RTT data)
  • Revise signature algorithm negotiation to group hash, signature algorithm, and curve together. This is backwards compatible.
  • Make ticket lifetime mandatory and limit it to a week.
  • Make the purpose strings lower-case. This matches how people are implementing for interop.
  • Define exporters.
  • Editorial cleanup

draft-11

  • Port the CFRG curves & signatures work from RFC4492bis.
  • Remove sequence number and version from additional_data, which is now empty.
  • Reorder values in HkdfLabel.
  • Add support for version anti-downgrade mechanism.
  • Update IANA considerations section and relax some of the policies.
  • Unify authentication modes. Add post-handshake client authentication.
  • Remove early_handshake content type. Terminate 0-RTT data with an alert.
  • Reset sequence number upon key change (as proposed by Fournet et al.)

draft-10

  • Remove ClientCertificateTypes field from CertificateRequest and add extensions.
  • Merge client and server key shares into a single extension.

draft-09

  • Change to RSA-PSS signatures for handshake messages.
  • Remove support for DSA.
  • Update key schedule per suggestions by Hugo, Hoeteck, and Bjoern Tackmann.
  • Add support for per-record padding.
  • Switch to encrypted record ContentType.
  • Change HKDF labeling to include protocol version and value lengths.
  • Shift the final decision to abort a handshake due to incompatible certificates to the client rather than having servers abort early.
  • Deprecate SHA-1 with signatures.
  • Add MTI algorithms.

draft-08

  • Remove support for weak and lesser used named curves.
  • Remove support for MD5 and SHA-224 hashes with signatures.
  • Update lists of available AEAD cipher suites and error alerts.
  • Reduce maximum permitted record expansion for AEAD from 2048 to 256 octets.
  • Require digital signatures even when a previous configuration is used.
  • Merge EarlyDataIndication and KnownConfiguration.
  • Change code point for server_configuration to avoid collision with server_hello_done.
  • Relax certificate_list ordering requirement to match current practice.

draft-07

  • Integration of semi-ephemeral DH proposal.
  • Add initial 0-RTT support.
  • Remove resumption and replace with PSK + tickets.
  • Move ClientKeyShare into an extension.
  • Move to HKDF.

draft-06

  • Prohibit RC4 negotiation for backwards compatibility.
  • Freeze & deprecate record layer version field.
  • Update format of signatures with context.
  • Remove explicit IV.

draft-05

  • Prohibit SSL negotiation for backwards compatibility.
  • Fix which MS is used for exporters.

draft-04

  • Modify key computations to include session hash.
  • Remove ChangeCipherSpec.
  • Renumber the new handshake messages to be somewhat more consistent with existing convention and to remove a duplicate registration.
  • Remove renegotiation.
  • Remove point format negotiation.

draft-03

  • Remove GMT time.
  • Merge in support for ECC from RFC 4492 but without explicit curves.
  • Remove the unnecessary length field from the AD input to AEAD ciphers.
  • Rename {Client,Server}KeyExchange to {Client,Server}KeyShare.
  • Add an explicit HelloRetryRequest to reject the client’s.

draft-02

  • Increment version number.
  • Rework handshake to provide 1-RTT mode.
  • Remove custom DHE groups.
  • Remove support for compression.
  • Remove support for static RSA and DH key exchange.
  • Remove support for non-AEAD ciphers.

The following is a list of the major functional differences between TLS 1.2 and TLS 1.3. It is not intended to be exhaustive and there are many minor differences.

  • The list of supported symmetric algorithms has been pruned of all algorithms that are considered legacy. Those that remain all use Authenticated Encryption with Associated Data (AEAD) algorithms. The ciphersuite concept has been changed to separate the authentication and key exchange mechanisms from the record protection algorithm (including secret key length) and a hash to be used with the key derivation function and HMAC.
  • A 0-RTT mode was added, saving a round-trip at connection setup for some application data, at the cost of certain security properties.
  • Static RSA and Diffie-Hellman cipher suites have been removed; all public-key based key exchange mechanisms now provide forward secrecy.
  • All handshake messages after the ServerHello are now encrypted. The newly introduced EncryptedExtension message allows various extensions previously sent in clear in the ServerHello to also enjoy confidentiality protection from active attackers.
  • The key derivation functions have been re-designed. The new design allows easier analysis by cryptographers due to their improved key separation properties. The HMAC-based Extract-and-Expand Key Derivation Function (HKDF) is used as an underlying primitive.
  • The handshake state machine has been significantly restructured to be more consistent and to remove superfluous messages such as ChangeCipherSpec (except when needed for middlebox compatibility).
  • Elliptic curve algorithms are now in the base spec and new signature algorithms, such as ed25519 and ed448, are included. TLS 1.3 removed point format negotiation in favor of a single point format for each curve.
  • Other cryptographic improvements including the removal of compression and custom DHE groups, changing the RSA padding to use RSASSA-PSS, and the removal of DSA.
  • The TLS 1.2 version negotiation mechanism has been deprecated in favor of a version list in an extension. This increases compatibility with existing servers that incorrectly implemented version negotiation.
  • Session resumption with and without server-side state as well as the PSK-based ciphersuites of earlier TLS versions have been replaced by a single new PSK exchange.
  • Updated references to point to the updated versions of RFCs, as appropriate (e.g., RFC 5280 rather than RFC 3280).

This document defines several changes that optionally affect implementations of TLS 1.2, including those which do not also support TLS 1.3:

  • A version downgrade protection mechanism is described in Section 4.1.3.
  • RSASSA-PSS signature schemes are defined in Section 4.2.3.
  • The “supported_versions” ClientHello extension can be used to negotiate the version of TLS to use, in preference to the legacy_version field of the ClientHello.
  • The “signature_algorithms_cert” extension allows a client to indicate which signature algorithms it can validate in X.509 certificates

Additionally, this document clarifies some compliance requirements for earlier versions of TLS; see Section 9.3.

The cryptographic parameters used by the secure channel are produced by the TLS handshake protocol. This sub-protocol of TLS is used by the client and server when first communicating with each other. The handshake protocol allows peers to negotiate a protocol version, select cryptographic algorithms, optionally authenticate each other, and establish shared secret keying material. Once the handshake is complete, the peers use the established keys to protect the application layer traffic.

A failure of the handshake or other protocol error triggers the termination of the connection, optionally preceded by an alert message (Section 6).

TLS supports three basic key exchange modes:

  • (EC)DHE (Diffie-Hellman over either finite fields or elliptic curves)
  • PSK-only
  • PSK with (EC)DHE

Figure 1 below shows the basic full TLS handshake:

       Client                                               Server

Key  ^ ClientHello
Exch | + key_share*
     | + signature_algorithms*
     | + psk_key_exchange_modes*
     v + pre_shared_key*         -------->
                                                       ServerHello  ^ Key
                                                      + key_share*  | Exch
                                                 + pre_shared_key*  v
                                             {EncryptedExtensions}  ^  Server
                                             {CertificateRequest*}  v  Params
                                                    {Certificate*}  ^
                                              {CertificateVerify*}  | Auth
                                                         {Finished}  v
                                  <--------     [Application Data*]
     ^ {Certificate*}
Auth | {CertificateVerify*}
     v {Finished}                -------->
       [Application Data]        <------->      [Application Data]

              +  Indicates noteworthy extensions sent in the
                 previously noted message.

              *  Indicates optional or situation-dependent
                 messages/extensions that are not always sent.

              {} Indicates messages protected using keys
                 derived from a [sender]_handshake_traffic_secret.

              [] Indicates messages protected using keys
                 derived from [sender]_application_traffic_secret_N

Figure 1: Message flow for full TLS Handshake

The handshake can be thought of as having three phases (indicated in the diagram above):

  • Key Exchange: Establish shared keying material and select the cryptographic parameters. Everything after this phase is encrypted.
  • Server Parameters: Establish other handshake parameters (whether the client is authenticated, application layer protocol support, etc.).
  • Authentication: Authenticate the server (and optionally the client) and provide key confirmation and handshake integrity.

In the Key Exchange phase, the client sends the ClientHello (Section 4.1.2) message, which contains a random nonce (ClientHello.random); its offered protocol versions; a list of symmetric cipher/HKDF hash pairs; either a set of Diffie-Hellman key shares (in the “key_share” extension Section 4.2.8), a set of pre-shared key labels (in the “pre_shared_key” extension Section 4.2.11) or both; and potentially additional extensions. Additional fields and/or messages may also be present for middlebox compatibility.

The server processes the ClientHello and determines the appropriate cryptographic parameters for the connection. It then responds with its own ServerHello (Section 4.1.3), which indicates the negotiated connection parameters. The combination of the ClientHello and the ServerHello determines the shared keys. If (EC)DHE key establishment is in use, then the ServerHello contains a “key_share” extension with the server’s ephemeral Diffie-Hellman share; the server’s share MUST be in the same group as one of the client’s shares. If PSK key establishment is in use, then the ServerHello contains a “pre_shared_key” extension indicating which of the client’s offered PSKs was selected. Note that implementations can use (EC)DHE and PSK together, in which case both extensions will be supplied.

The server then sends two messages to establish the Server Parameters:

EncryptedExtensions:
responses to ClientHello extensions that are not required to determine the cryptographic parameters, other than those that are specific to individual certificates. [Section 4.3.1]
CertificateRequest:
if certificate-based client authentication is desired, the desired parameters for that certificate. This message is omitted if client authentication is not desired. [Section 4.3.2]

Finally, the client and server exchange Authentication messages. TLS uses the same set of messages every time that certificate-based authentication is needed. (PSK-based authentication happens as a side effect of key exchange.) Specifically:

Certificate:
the certificate of the endpoint and any per-certificate extensions. This message is omitted by the server if not authenticating with a certificate and by the client if the server did not send CertificateRequest (thus indicating that the client should not authenticate with a certificate). Note that if raw public keys [RFC7250] or the cached information extension [RFC7924] are in use, then this message will not contain a certificate but rather some other value corresponding to the server’s long-term key. [Section 4.4.2]
CertificateVerify:
a signature over the entire handshake using the private key corresponding to the public key in the Certificate message. This message is omitted if the endpoint is not authenticating via a certificate. [Section 4.4.3]
Finished:
a MAC (Message Authentication Code) over the entire handshake. This message provides key confirmation, binds the endpoint’s identity to the exchanged keys, and in PSK mode also authenticates the handshake. [Section 4.4.4]

Upon receiving the server’s messages, the client responds with its Authentication messages, namely Certificate and CertificateVerify (if requested), and Finished.

At this point, the handshake is complete, and the client and server derive the keying material required by the record layer to exchange application-layer data protected through authenticated encryption. Application data MUST NOT be sent prior to sending the Finished message, except as specified in [Section 2.3]. Note that while the server may send application data prior to receiving the client’s Authentication messages, any data sent at that point is, of course, being sent to an unauthenticated peer.
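As an informal illustration of the Finished message summarized above (the precise definitions appear in Sections 4.4.4 and 7.1), the following Python sketch computes verify_data from a base key and a transcript hash, assuming SHA-256 as the negotiated hash; the inputs at the bottom are placeholders, not values produced by a real handshake.

   import hashlib
   import hmac

   def hkdf_expand(secret, info, length, hash_name="sha256"):
       # RFC 5869 HKDF-Expand, built from HMAC with the negotiated hash.
       output, block, counter = b"", b"", 1
       while len(output) < length:
           block = hmac.new(secret, block + info + bytes([counter]),
                            hash_name).digest()
           output += block
           counter += 1
       return output[:length]

   def hkdf_expand_label(secret, label, context, length):
       # HkdfLabel = uint16 length || opaque label<7..255> ("tls13 " + label)
       #             || opaque context<0..255>
       full_label = b"tls13 " + label
       hkdf_label = (length.to_bytes(2, "big")
                     + bytes([len(full_label)]) + full_label
                     + bytes([len(context)]) + context)
       return hkdf_expand(secret, hkdf_label, length)

   def finished_verify_data(base_key, transcript_hash):
       # finished_key = HKDF-Expand-Label(base_key, "finished", "", Hash.length)
       # verify_data  = HMAC(finished_key, transcript_hash)
       finished_key = hkdf_expand_label(base_key, b"finished", b"", 32)
       return hmac.new(finished_key, transcript_hash, "sha256").digest()

   # Placeholder inputs purely for illustration:
   example = finished_verify_data(b"\x00" * 32,
                                  hashlib.sha256(b"handshake messages").digest())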

If the client has not provided a sufficient “key_share” extension (e.g., it includes only DHE or ECDHE groups unacceptable to or unsupported by the server), the server corrects the mismatch with a HelloRetryRequest and the client needs to restart the handshake with an appropriate “key_share” extension, as shown in Figure 2. If no common cryptographic parameters can be negotiated, the server MUST abort the handshake with an appropriate alert.

         Client                                               Server

         ClientHello
         + key_share             -------->
                                                   HelloRetryRequest
                                  <--------               + key_share
         ClientHello
         + key_share             -------->
                                                         ServerHello
                                                         + key_share
                                               {EncryptedExtensions}
                                               {CertificateRequest*}
                                                      {Certificate*}
                                                {CertificateVerify*}
                                                          {Finished}
                                  <--------      [Application Data*]
         {Certificate*}
         {CertificateVerify*}
         {Finished}              -------->
         [Application Data]      <------->        [Application Data]

Figure 2: Message flow for a full handshake with mismatched parameters

Note: The handshake transcript incorporates the initial ClientHello/HelloRetryRequest exchange; it is not reset with the new ClientHello.
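To make the parameter-mismatch handling shown in Figure 2 concrete, here is a rough, non-normative Python sketch of the server-side decision between ServerHello, HelloRetryRequest, and aborting; the group names and the simple first-match policy are illustrative assumptions only.

   def respond_to_client_hello(client_share_groups, client_supported_groups,
                               server_groups):
       # Prefer a group for which the client already supplied a key share.
       for group in server_groups:
           if group in client_share_groups:
               return ("ServerHello", group)
       # Otherwise, if the client supports an acceptable group but sent no
       # usable share, ask it to retry with that group.
       for group in server_groups:
           if group in client_supported_groups:
               return ("HelloRetryRequest", group)
       # No common cryptographic parameters: abort with an appropriate alert.
       return ("abort", "handshake_failure")

   # The client shared only a group the server rejects, but also advertises
   # x25519, so the server asks for a retry:
   assert respond_to_client_hello(["secp192r1"],
                                  ["secp192r1", "x25519"],
                                  ["x25519"]) == ("HelloRetryRequest", "x25519")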

TLS also allows several optimized variants of the basic handshake, as described in the following sections.

Although TLS PSKs can be established out of band, PSKs can also be established in a previous connection and then used to establish a new connection (“session resumption” or “resuming” with a PSK). Once a handshake has completed, the server can send to the client a PSK identity that corresponds to a unique key derived from the initial handshake (see Section 4.6.1). The client can then use that PSK identity in future handshakes to negotiate the use of the associated PSK. If the server accepts the PSK, then the security context of the new connection is cryptographically tied to the original connection and the key derived from the initial handshake is used to bootstrap the cryptographic state instead of a full handshake. In TLS 1.2 and below, this functionality was provided by “session IDs” and “session tickets” [RFC5077]. Both mechanisms are obsoleted in TLS 1.3.

PSKs can be used with (EC)DHE key exchange in order to provide forward secrecy in combination with shared keys, or can be used alone, at the cost of losing forward secrecy for the application data.

Figure 3 shows a pair of handshakes in which the first establishes a PSK and the second uses it:

       Client                                               Server

Initial Handshake:
       ClientHello
       + key_share               -------->
                                                       ServerHello
                                                       + key_share
                                             {EncryptedExtensions}
                                             {CertificateRequest*}
                                                    {Certificate*}
                                              {CertificateVerify*}
                                                        {Finished}
                                 <--------     [Application Data*]
       {Certificate*}
       {CertificateVerify*}
       {Finished}                -------->
                                 <--------      [NewSessionTicket]
       [Application Data]        <------->      [Application Data]


Subsequent Handshake:
       ClientHello
       + key_share*
       + pre_shared_key          -------->
                                                       ServerHello
                                                  + pre_shared_key
                                                      + key_share*
                                             {EncryptedExtensions}
                                                        {Finished}<--------     [Application Data*]
       {Finished}                -------->
       [Application Data]        <------->      [Application Data]

Figure 3: Message flow for resumption and PSK

As the server is authenticating via a PSK, it does not send a Certificate or a CertificateVerify message. When a client offers resumption via PSK, it SHOULD also supply a “key_share” extension to the server to allow the server to decline resumption and fall back to a full handshake, if needed. The server responds with a “pre_shared_key” extension to negotiate use of PSK key establishment and can (as shown here) respond with a “key_share” extension to do (EC)DHE key establishment, thus providing forward secrecy.

When PSKs are provisioned out of band, the PSK identity and the KDF hash algorithm to be used with the PSK MUST also be provisioned.

Note:
When using an out-of-band provisioned pre-shared secret, a critical consideration is using sufficient entropy during the key generation, as discussed in [RFC4086]. Deriving a shared secret from a password or other low-entropy sources is not secure. A low-entropy secret, or password, is subject to dictionary attacks based on the PSK binder. The specified PSK authentication is not a strong password-based authenticated key exchange even when used with Diffie-Hellman key establishment. Specifically, it does not prevent an attacker that can observe the handshake from performing a brute-force attack on the password/pre-shared key.
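As a minimal sketch of the provisioning guidance above, an out-of-band PSK should be drawn from a cryptographically secure random source rather than derived from a password; the identity string and the SHA-256 choice below are placeholders for whatever is actually provisioned alongside the key.

   import secrets

   # A full-entropy 256-bit PSK suitable for SHA-256-based KDFs. The identity
   # and the KDF hash must be provisioned together with the key itself.
   psk = secrets.token_bytes(32)              # from a CSPRNG, never a password
   psk_identity = b"example-provisioned-id"   # placeholder identity
   psk_kdf_hash = "sha256"                    # hash provisioned with the PSK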

When clients and servers share a PSK (either obtained externally or via a previous handshake), TLS 1.3 allows clients to send data on the first flight (“early data”). The client uses the PSK to authenticate the server and to encrypt the early data.

As shown in Figure 4, the 0-RTT data is just added to the 1-RTT handshake in the first flight. The rest of the handshake uses the same messages as for a 1-RTT handshake with PSK resumption.

         Client                                               Server

         ClientHello
         + early_data
         + key_share*
         + psk_key_exchange_modes
         + pre_shared_key
         (Application Data*)     -------->
                                                         ServerHello
                                                    + pre_shared_key
                                                        + key_share*
                                               {EncryptedExtensions}
                                                       + early_data*
                                                          {Finished}
                                   <--------       [Application Data*]
         (EndOfEarlyData)
         {Finished}              -------->
         [Application Data]      <------->        [Application Data]

               +  Indicates noteworthy extensions sent in the
                  previously noted message.

               *  Indicates optional or situation-dependent
                  messages/extensions that are not always sent.

               () Indicates messages protected using keys
                  derived from client_early_traffic_secret.

               {} Indicates messages protected using keys
                  derived from a [sender]_handshake_traffic_secret.

               [] Indicates messages protected using keys
                  derived from [sender]_application_traffic_secret_N

Figure 4: Message flow for a zero round trip handshake

IMPORTANT NOTE: The security properties for 0-RTT data are weaker than those for other kinds of TLS data. Specifically:

  1. This data is not forward secret, as it is encrypted solely under keys derived using the offered PSK.
  2. There are no guarantees of non-replay between connections. Protection against replay for ordinary TLS 1.3 1-RTT data is provided via the server’s Random value, but 0-RTT data does not depend on the ServerHello and therefore has weaker guarantees. This is especially relevant if the data is authenticated either with TLS client authentication or inside the application protocol. The same warnings apply to any use of the early_exporter_master_secret.

0-RTT data cannot be duplicated within a connection (i.e., the server will not process the same data twice for the same connection) and an attacker will not be able to make 0-RTT data appear to be 1-RTT data (because it is protected with different keys.) Appendix E.5 contains a description of potential attacks and Section 8 describes mechanisms which the server can use to limit the impact of replay.
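Section 8 describes server-side mechanisms for limiting replay, such as single-use tickets and ClientHello recording. The following is only a rough sketch of the ClientHello-recording idea, assuming a single server, an in-memory store, and an arbitrary ten-second acceptance window.

   import time

   class EarlyDataReplayFilter:
       def __init__(self, window_seconds=10.0):
           self.window = window_seconds
           self.seen = {}  # ClientHello-derived value -> time first seen

       def accept_early_data(self, client_hello_hash):
           now = time.monotonic()
           # Expire old entries so the store stays bounded.
           self.seen = {h: t for h, t in self.seen.items()
                        if now - t <= self.window}
           if client_hello_hash in self.seen:
               return False  # likely replay: fall back to 1-RTT handling
           self.seen[client_hello_hash] = now
           return True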

This document deals with the formatting of data in an external representation. The following very basic and somewhat casually defined presentation syntax will be used.

The representation of all data items is explicitly specified. The basic data block size is one byte (i.e., 8 bits). Multiple byte data items are concatenations of bytes, from left to right, from top to bottom. From the byte stream, a multi-byte item (a numeric in the example) is formed (using C notation) by:

   value = (byte[0] << 8*(n-1)) | (byte[1] << 8*(n-2)) |
           ... | byte[n-1];

This byte ordering for multi-byte values is the commonplace network byte order or big-endian format.

Comments begin with “/*” and end with “*/”.

Optional components are denoted by enclosing them in “[[ ]]” double brackets.

Single-byte entities containing uninterpreted data are of type opaque.

A type alias T’ for an existing type T is defined by:

   T T';

The basic numeric data type is an unsigned byte (uint8). All larger numeric data types are formed from fixed-length series of bytes concatenated as described in Section 3.1 and are also unsigned. The following numeric types are predefined.

   uint8 uint16[2];
   uint8 uint24[3];
   uint8 uint32[4];
   uint8 uint64[8];

All values, here and elsewhere in the specification, are transmitted in network byte (big-endian) order; the uint32 represented by the hex bytes 01 02 03 04 is equivalent to the decimal value 16909060.
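The byte-ordering rule can be checked directly in Python; the snippet below simply confirms that the shift-based formula given earlier and a big-endian interpretation of the bytes 01 02 03 04 agree on the value 16909060.

   data = bytes.fromhex("01020304")

   # Shift-based reconstruction, as in the formula above.
   value = 0
   for byte in data:
       value = (value << 8) | byte

   assert value == 16909060
   assert int.from_bytes(data, "big") == 16909060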

A vector (single-dimensioned array) is a stream of homogeneous data elements. The size of the vector may be specified at documentation time or left unspecified until runtime. In either case, the length declares the number of bytes, not the number of elements, in the vector. The syntax for specifying a new type, T’, that is a fixed-length vector of type T is

   T T'[n];

Here, T’ occupies n bytes in the data stream, where n is a multiple of the size of T. The length of the vector is not included in the encoded stream.

In the following example, Datum is defined to be three consecutive bytes that the protocol does not interpret, while Data is three consecutive Datum, consuming a total of nine bytes.

   opaque Datum[3];      /* three uninterpreted bytes */
   Datum Data[9];        /* 3 consecutive 3-byte vectors */

Variable-length vectors are defined by specifying a subrange of legal lengths, inclusively, using the notation <floor..ceiling>. When these are encoded, the actual length precedes the vector’s contents in the byte stream. The length will be in the form of a number consuming as many bytes as required to hold the vector’s specified maximum (ceiling) length. A variable-length vector with an actual length field of zero is referred to as an empty vector.

   T T'<floor..ceiling>;

In the following example, mandatory is a vector that must contain between 300 and 400 bytes of type opaque. It can never be empty. The actual length field consumes two bytes, a uint16, which is sufficient to represent the value 400 (see Section 3.3). Similarly, longer can represent up to 800 bytes of data, or 400 uint16 elements, and it may be empty. Its encoding will include a two-byte actual length field prepended to the vector. The length of an encoded vector must be an exact multiple of the length of a single element (e.g., a 17-byte vector of uint16 would be illegal).

   opaque mandatory<300..400>;
         /* length field is 2 bytes, cannot be empty */
   uint16 longer<0..800>;
         /* zero to 400 16-bit unsigned integers */
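A small sketch of the variable-length vector encoding described above: the length prefix consumes exactly as many bytes as are needed to represent the ceiling, and the actual length in bytes precedes the contents.

   def encode_vector(contents, floor, ceiling):
       if not floor <= len(contents) <= ceiling:
           raise ValueError("vector length outside declared bounds")
       # The prefix is sized to hold the ceiling, not the actual length.
       prefix_len = (ceiling.bit_length() + 7) // 8
       return len(contents).to_bytes(prefix_len, "big") + contents

   # opaque mandatory<300..400>: a two-byte length field, as noted above.
   encoded = encode_vector(b"\x00" * 300, 300, 400)
   assert encoded[:2] == (300).to_bytes(2, "big")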

An additional sparse data type is available called enum or enumerated. Each definition is a different type. Only enumerateds of the same type may be assigned or compared. Every element of an enumerated must be assigned a value, as demonstrated in the following example. Since the elements of the enumerated are not ordered, they can be assigned any unique value, in any order.

   enum { e1(v1), e2(v2), ... , en(vn) [[, (n)]] } Te;

Future extensions or additions to the protocol may define new values. Implementations need to be able to parse and ignore unknown values unless the definition of the field states otherwise.

An enumerated occupies as much space in the byte stream as would its maximal defined ordinal value. The following definition would cause one byte to be used to carry fields of type Color.

   enum { red(3), blue(5), white(7) } Color;

One may optionally specify a value without its associated tag to force the width definition without defining a superfluous element.

In the following example, Taste will consume two bytes in the data stream but can only assume the values 1, 2, or 4 in the current version of the protocol.

   enum { sweet(1), sour(2), bitter(4), (32000) } Taste;

The names of the elements of an enumeration are scoped within the defined type. In the first example, a fully qualified reference to the second element of the enumeration would be Color.blue. Such qualification is not required if the target of the assignment is well specified.

   Color color = Color.blue;     /* overspecified, legal */
   Color color = blue;           /* correct, type implicit */

The names assigned to enumerateds do not need to be unique. The numerical value can describe a range over which the same name applies. The value includes the minimum and maximum inclusive values in that range, separated by two period characters. This is principally useful for reserving regions of the space.

   enum { sad(0), meh(1..254), happy(255) } Mood;

Structure types may be constructed from primitive types for convenience. Each specification declares a new, unique type. The syntax for definition is much like that of C.

   struct {
       T1 f1;
       T2 f2;
       ...
       Tn fn;
   } T;

Fixed- and variable-length vector fields are allowed using the standard vector syntax. Structures V1 and V2 in the variants example below demonstrate this.

The fields within a structure may be qualified using the type’s name, with a syntax much like that available for enumerateds. For example, T.f2 refers to the second field of the previous declaration.

Fields and variables may be assigned a fixed value using “=”, as in:

   struct {
       T1 f1 = 8;  /* T.f1 must always be 8 */
       T2 f2;
   } T;

Defined structures may have variants based on some knowledge that is available within the environment. The selector must be an enumerated type that defines the possible variants the structure defines. Each arm of the select specifies the type of that variant’s field and an optional field label. The mechanism by which the variant is selected at runtime is not prescribed by the presentation language.

   struct {
       T1 f1;
       T2 f2;
       ....
       Tn fn;
       select (E) {
           case e1: Te1 [[fe1]];
           case e2: Te2 [[fe2]];
           ....
           case en: Ten [[fen]];
       };
   } Tv;

For example:

   enum { apple(0), orange(1) } VariantTag;

   struct {
       uint16 number;
       opaque string<0..10>; /* variable length */
   } V1;

   struct {
       uint32 number;
       opaque string[10];    /* fixed length */
   } V2;

   struct {
       VariantTag type;
       select (VariantRecord.type) {
           case apple:  V1;
           case orange: V2;
       };
   } VariantRecord;
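
For illustration only, a Python sketch of how an implementation might serialize the VariantRecord above once the tag is known; the helper names are not part of the presentation language.

   def encode_v1(number, string):
       # V1: uint16 number, then opaque string<0..10> (one-byte length field).
       assert len(string) <= 10
       return number.to_bytes(2, "big") + bytes([len(string)]) + string

   def encode_v2(number, string):
       # V2: uint32 number, then opaque string[10] (fixed length, no length field).
       assert len(string) == 10
       return number.to_bytes(4, "big") + string

   def encode_variant_record(tag, number, string):
       # VariantTag occupies one byte: apple(0) selects V1, orange(1) selects V2.
       if tag == 0:
           body = encode_v1(number, string)
       elif tag == 1:
           body = encode_v2(number, string)
       else:
           raise ValueError("unknown VariantTag")
       return bytes([tag]) + body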

4. Handshake Protocol

The handshake protocol is used to negotiate the security parameters of a connection. Handshake messages are supplied to the TLS record layer, where they are encapsulated within one or more TLSPlaintext or TLSCiphertext structures, which are processed and transmitted as specified by the current active connection state.

   enum {
       client_hello(1),
       server_hello(2),
       new_session_ticket(4),
       end_of_early_data(5),
       encrypted_extensions(8),
       certificate(11),
       certificate_request(13),
       certificate_verify(15),
       finished(20),
       key_update(24),
       message_hash(254),
       (255)
   } HandshakeType;

   struct {
       HandshakeType msg_type;    /* handshake type */
       uint24 length;             /* bytes in message */
       select (Handshake.msg_type) {
           case client_hello:          ClientHello;
           case server_hello:          ServerHello;
           case end_of_early_data:     EndOfEarlyData;
           case encrypted_extensions:  EncryptedExtensions;
           case certificate_request:   CertificateRequest;
           case certificate:           Certificate;
           case certificate_verify:    CertificateVerify;
           case finished:              Finished;
           case new_session_ticket:    NewSessionTicket;
           case key_update:            KeyUpdate;
       };
   } Handshake;
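
As a non-normative sketch, parsing the four-byte handshake header (a one-byte msg_type followed by a uint24 length) might look like the following Python; the function name is illustrative.

   def parse_handshake_header(buf):
       # Returns (msg_type, body) or raises if the message is incomplete.
       if len(buf) < 4:
           raise ValueError("truncated handshake header")
       msg_type = buf[0]
       length = int.from_bytes(buf[1:4], "big")   # uint24: bytes in message
       if len(buf) < 4 + length:
           raise ValueError("truncated handshake body")
       return msg_type, buf[4:4 + length]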

Protocol messages MUST be sent in the order defined in Section 4.4.1 and shown in the diagrams in Section 2. A peer which receives a handshake message in an unexpected order MUST abort the handshake with an “unexpected_message” alert.

New handshake message types are assigned by IANA as described in Section 11.

4.1. Key Exchange Messages

The key exchange messages are used to determine the security capabilities of the client and the server and to establish shared secrets, including the traffic keys used to protect the rest of the handshake and the data.

4.1.1. Cryptographic Negotiation

In TLS, the cryptographic negotiation proceeds by the client offering the following four sets of options in its ClientHello:

  • A list of cipher suites which indicates the AEAD algorithm/HKDF hash pairs which the client supports.
  • A “supported_groups” (Section 4.2.7) extension which indicates the (EC)DHE groups which the client supports and a “key_share” (Section 4.2.8) extension which contains (EC)DHE shares for some or all of these groups.
  • A “signature_algorithms” (Section 4.2.3) extension which indicates the signature algorithms which the client can accept.
  • A “pre_shared_key” (Section 4.2.11) extension which contains a list of symmetric key identities known to the client and a “psk_key_exchange_modes” (Section 4.2.9) extension which indicates the key exchange modes that may be used with PSKs.

If the server does not select a PSK, then the first three of these options are entirely orthogonal: the server independently selects a cipher suite, an (EC)DHE group and key share for key establishment, and a signature algorithm/certificate pair to authenticate itself to the client. If there is no overlap between the received “supported_groups” and the groups supported by the server then the server MUST abort the handshake with a “handshake_failure” or an “insufficient_security” alert.

If the server selects a PSK, then it MUST also select a key establishment mode from the set indicated by client’s “psk_key_exchange_modes” extension (at present, PSK alone or with (EC)DHE). Note that if the PSK can be used without (EC)DHE then non-overlap in the “supported_groups” parameters need not be fatal, as it is in the non-PSK case discussed in the previous paragraph.

If the server selects an (EC)DHE group and the client did not offer a compatible “key_share” extension in the initial ClientHello, the server MUST respond with a HelloRetryRequest (Section 4.1.4) message.

If the server successfully selects parameters and does not require a HelloRetryRequest, it indicates the selected parameters in the ServerHello as follows:

  • If PSK is being used, then the server will send a “pre_shared_key” extension indicating the selected key.
  • If PSK is not being used, then (EC)DHE and certificate-based authentication are always used.
  • When (EC)DHE is in use, the server will also provide a “key_share” extension.
  • When authenticating via a certificate, the server will send the Certificate (Section 4.4.2) and CertificateVerify (Section 4.4.3) messages. In TLS 1.3 as defined by this document, either a PSK or a certificate is always used, but not both. Future documents may define how to use them together.

If the server is unable to negotiate a supported set of parameters (i.e., there is no overlap between the client and server parameters), it MUST abort the handshake with either a “handshake_failure” or “insufficient_security” fatal alert (see Section 6).

4.1.2. Client Hello

When a client first connects to a server, it is REQUIRED to send the ClientHello as its first TLS message. The client will also send a ClientHello when the server has responded to its ClientHello with a HelloRetryRequest. In that case, the client MUST send the same ClientHello without modification, except:

  • If a “key_share” extension was supplied in the HelloRetryRequest, replacing the list of shares with a list containing a single KeyShareEntry from the indicated group.
  • Removing the “early_data” extension (Section 4.2.10) if one was present. Early data is not permitted after HelloRetryRequest.
  • Including a “cookie” extension if one was provided in the HelloRetryRequest.
  • Updating the “pre_shared_key” extension if present by recomputing the “obfuscated_ticket_age” and binder values and (optionally) removing any PSKs which are incompatible with the server’s indicated cipher suite.
  • Optionally adding, removing, or changing the length of the “padding” extension [RFC7685].
  • Other modifications that may be allowed by an extension defined in the future and present in the HelloRetryRequest.

Because TLS 1.3 forbids renegotiation, if a server has negotiated TLS 1.3 and receives a ClientHello at any other time, it MUST terminate the connection with an “unexpected_message” alert.

If a server established a TLS connection with a previous version of TLS and receives a TLS 1.3 ClientHello in a renegotiation, it MUST retain the previous protocol version. In particular, it MUST NOT negotiate TLS 1.3.

Structure of this message:

   uint16 ProtocolVersion;
   opaque Random[32];

   uint8 CipherSuite[2];    /* Cryptographic suite selector */

   struct {
       ProtocolVersion legacy_version = 0x0303;    /* TLS v1.2 */
       Random random;
       opaque legacy_session_id<0..32>;
       CipherSuite cipher_suites<2..2^16-2>;
       opaque legacy_compression_methods<1..2^8-1>;
       Extension extensions<8..2^16-1>;
   } ClientHello;
legacy_version
In previous versions of TLS, this field was used for version negotiation and represented the highest version number supported by the client. Experience has shown that many servers do not properly implement version negotiation, leading to “version intolerance” in which the server rejects an otherwise acceptable ClientHello with a version number higher than it supports. In TLS 1.3, the client indicates its version preferences in the “supported_versions” extension (Section 4.2.1) and the legacy_version field MUST be set to 0x0303, which is the version number for TLS 1.2. (See Appendix D for details about backward compatibility.)
random
32 bytes generated by a secure random number generator. See Appendix C for additional information.
legacy_session_id
Versions of TLS before TLS 1.3 supported a “session resumption” feature which has been merged with Pre-Shared Keys in this version (see Section 2.2). A client which has a cached session ID set by a pre-TLS 1.3 server SHOULD set this field to that value. In compatibility mode (see Appendix D.4) this field MUST be non-empty, so a client not offering a pre-TLS 1.3 session MUST generate a new 32-byte value. This value need not be random but SHOULD be unpredictable to avoid implementations fixating on a specific value (also known as ossification). Otherwise, it MUST be set as a zero length vector (i.e., a single zero byte length field).
cipher_suites
This is a list of the symmetric cipher options supported by the client, specifically the record protection algorithm (including secret key length) and a hash to be used with HKDF, in descending order of client preference. If the list contains cipher suites that the server does not recognize, support or wish to use, the server MUST ignore those cipher suites and process the remaining ones as usual. Values are defined in Appendix B.4. If the client is attempting a PSK key establishment, it SHOULD advertise at least one cipher suite indicating a Hash associated with the PSK.
legacy_compression_methods
Versions of TLS before 1.3 supported compression with the list of supported compression methods being sent in this field. For every TLS 1.3 ClientHello, this vector MUST contain exactly one byte, set to zero, which corresponds to the “null” compression method in prior versions of TLS. If a TLS 1.3 ClientHello is received with any other value in this field, the server MUST abort the handshake with an “illegal_parameter” alert. Note that TLS 1.3 servers might receive TLS 1.2 or prior ClientHellos which contain other compression methods and (if negotiating such a prior version) MUST follow the procedures for the appropriate prior version of TLS. TLS 1.3 ClientHellos are identified as having a legacy_version of 0x0303 and a supported_versions extension present with 0x0304 as the highest version indicated therein.
extensions
Clients request extended functionality from servers by sending data in the extensions field. The actual “Extension” format is defined in Section 4.2. In TLS 1.3, use of certain extensions is mandatory, as functionality is moved into extensions to preserve ClientHello compatibility with previous versions of TLS. Servers MUST ignore unrecognized extensions.

All versions of TLS allow an extensions field to optionally follow the compression_methods field. TLS 1.3 ClientHello messages always contain extensions (minimally “supported_versions”, otherwise they will be interpreted as TLS 1.2 ClientHello messages). However, TLS 1.3 servers might receive ClientHello messages without an extensions field from prior versions of TLS. The presence of extensions can be detected by determining whether there are bytes following the compression_methods field at the end of the ClientHello. Note that this method of detecting optional data differs from the normal TLS method of having a variable-length field, but it is used for compatibility with TLS before extensions were defined. TLS 1.3 servers will need to perform this check first and only attempt to negotiate TLS 1.3 if the “supported_versions” extension is present. If negotiating a version of TLS prior to 1.3, a server MUST check that the message either contains no data after legacy_compression_methods or that it contains a valid extensions block with no data following. If not, then it MUST abort the handshake with a “decode_error” alert.

In the event that a client requests additional functionality using extensions, and this functionality is not supplied by the server, the client MAY abort the handshake.

After sending the ClientHello message, the client waits for a ServerHello or HelloRetryRequest message. If early data is in use, the client may transmit early application data (Section 2.3) while waiting for the next handshake message.

4.1.3. Server Hello

The server will send this message in response to a ClientHello message to proceed with the handshake if it is able to negotiate an acceptable set of handshake parameters based on the ClientHello.

Structure of this message:

   struct {
       ProtocolVersion legacy_version = 0x0303;    /* TLS v1.2 */
       Random random;
       opaque legacy_session_id_echo<0..32>;
       CipherSuite cipher_suite;
       uint8 legacy_compression_method = 0;
       Extension extensions<6..2^16-1>;
   } ServerHello;
legacy_version
In previous versions of TLS, this field was used for version negotiation and represented the selected version number for the connection. Unfortunately, some middleboxes fail when presented with new values. In TLS 1.3, the TLS server indicates its version using the “supported_versions” extension (Section 4.2.1), and the legacy_version field MUST be set to 0x0303, which is the version number for TLS 1.2. (See Appendix D for details about backward compatibility.)
random
32 bytes generated by a secure random number generator. See Appendix C for additional information. The last eight bytes MUST be overwritten as described below if negotiating TLS 1.2 or TLS 1.1, but the remaining bytes MUST be random. This structure is generated by the server and MUST be generated independently of the ClientHello.random.
legacy_session_id_echo
The contents of the client’s legacy_session_id field. Note that this field is echoed even if the client’s value corresponded to a cached pre-TLS 1.3 session which the server has chosen not to resume. A client which receives a legacy_session_id_echo field that does not match what it sent in the ClientHello MUST abort the handshake with an “illegal_parameter” alert.
cipher_suite
The single cipher suite selected by the server from the list in ClientHello.cipher_suites. A client which receives a cipher suite that was not offered MUST abort the handshake with an “illegal_parameter” alert.
legacy_compression_method
A single byte which MUST have the value 0.
extensions
A list of extensions. The ServerHello MUST only include extensions which are required to establish the cryptographic context and negotiate the protocol version. All TLS 1.3 ServerHello messages MUST contain the “supported_versions” extension. Current ServerHello messages additionally contain either the “pre_shared_key” or “key_share” extensions, or both when using a PSK with (EC)DHE key establishment. Other extensions are sent separately in the EncryptedExtensions message.

For reasons of backward compatibility with middleboxes (see Appendix D.4) the HelloRetryRequest message uses the same structure as the ServerHello, but with Random set to the special value of the SHA-256 of “HelloRetryRequest”:

  CF 21 AD 74 E5 9A 61 11 BE 1D 8C 02 1E 65 B8 91
  C2 A2 11 16 7A BB 8C 5E 07 9E 09 E2 C8 A8 33 9C

Upon receiving a message with type server_hello, implementations MUST first examine the Random value and, if it matches this value, process it as described in Section 4.1.4.
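
The special value can be recomputed directly. A minimal Python sketch (illustrative only; the variable names are assumptions of the example):

   import hashlib

   HELLO_RETRY_REQUEST_RANDOM = hashlib.sha256(b"HelloRetryRequest").digest()

   def is_hello_retry_request(server_hello_random):
       # A server_hello whose Random equals SHA-256("HelloRetryRequest")
       # is processed as a HelloRetryRequest (Section 4.1.4).
       return server_hello_random == HELLO_RETRY_REQUEST_RANDOM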

TLS 1.3 has a downgrade protection mechanism embedded in the server’s random value. TLS 1.3 servers which negotiate TLS 1.2 or below in response to a ClientHello MUST set the last eight bytes of their Random value specially.

If negotiating TLS 1.2, TLS 1.3 servers MUST set the last eight bytes of their Random value to the bytes:

  44 4F 57 4E 47 52 44 01

If negotiating TLS 1.1 or below, TLS 1.3 servers MUST and TLS 1.2 servers SHOULD set the last eight bytes of their Random value to the bytes:

  44 4F 57 4E 47 52 44 00

TLS 1.3 clients receiving a ServerHello indicating TLS 1.2 or below MUST check that the last eight bytes are not equal to either of these values. TLS 1.2 clients SHOULD also check that the last eight bytes are not equal to the second value if the ServerHello indicates TLS 1.1 or below. If a match is found, the client MUST abort the handshake with an “illegal_parameter” alert. This mechanism provides limited protection against downgrade attacks over and above what is provided by the Finished exchange: because the ServerKeyExchange, a message present in TLS 1.2 and below, includes a signature over both random values, it is not possible for an active attacker to modify the random values without detection as long as ephemeral ciphers are used. It does not provide downgrade protection when static RSA is used.
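
A client-side check of the downgrade sentinels could be sketched as follows (Python, non-normative; the version values use the TLS wire encoding):

   DOWNGRADE_TLS12 = bytes.fromhex("444F574E47524401")   # "DOWNGRD" || 0x01
   DOWNGRADE_TLS11 = bytes.fromhex("444F574E47524400")   # "DOWNGRD" || 0x00

   def check_downgrade(server_random, negotiated_version):
       # A TLS 1.3 client offered TLS 1.2 or below (0x0303 or lower) rejects
       # a ServerHello whose last eight random bytes match either sentinel.
       tail = server_random[-8:]
       if negotiated_version <= 0x0303 and tail in (DOWNGRADE_TLS12, DOWNGRADE_TLS11):
           raise ValueError("downgrade sentinel detected; abort the handshake")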

Note: This is a change from [RFC5246], so in practice many TLS 1.2 clients and servers will not behave as specified above.

A legacy TLS client performing renegotiation with TLS 1.2 or prior and which receives a TLS 1.3 ServerHello during renegotiation MUST abort the handshake with a “protocol_version” alert. Note that renegotiation is not possible when TLS 1.3 has been negotiated.

RFC EDITOR: PLEASE REMOVE THE FOLLOWING PARAGRAPH

Implementations of draft versions (see Section 4.2.1.1) of this specification SHOULD NOT implement this mechanism on either client or server. A pre-RFC client connecting to RFC servers, or vice versa, will appear to downgrade to TLS 1.2. With the mechanism enabled, this will cause an interoperability failure.

4.1.4. Hello Retry Request

The server will send this message in response to a ClientHello message if it is able to find an acceptable set of parameters but the ClientHello does not contain sufficient information to proceed with the handshake. As discussed in Section 4.1.3, the HelloRetryRequest has the same format as a ServerHello message, and the legacy_version, legacy_session_id_echo, cipher_suite, and legacy_compression_method fields have the same meaning. However, for convenience we discuss HelloRetryRequest throughout this document as if it were a distinct message.

The server’s extensions MUST contain “supported_versions” and otherwise the server SHOULD send only the extensions necessary for the client to generate a correct ClientHello pair. As with ServerHello, a HelloRetryRequest MUST NOT contain any extensions that were not first offered by the client in its ClientHello, with the exception of optionally the “cookie” (see Section 4.2.2) extension.

Upon receipt of a HelloRetryRequest, the client MUST check the legacy_version, legacy_session_id_echo, cipher_suite, and legacy_compression_method as specified in Section 4.1.3 and then process the extensions, starting with determining the version using “supported_versions”. Clients MUST abort the handshake with an “illegal_parameter” alert if the HelloRetryRequest would not result in any change in the ClientHello. If a client receives a second HelloRetryRequest in the same connection (i.e., where the ClientHello was itself in response to a HelloRetryRequest), it MUST abort the handshake with an “unexpected_message” alert.

Otherwise, the client MUST process all extensions in the HelloRetryRequest and send a second updated ClientHello. The HelloRetryRequest extensions defined in this specification are:

  • supported_versions (see Section 4.2.1)
  • cookie (see Section 4.2.2)
  • key_share (see Section 4.2.8)

In addition, in its updated ClientHello, the client SHOULD NOT offer any pre-shared keys associated with a hash other than that of the selected cipher suite. This allows the client to avoid having to compute partial hash transcripts for multiple hashes in the second ClientHello. A client which receives a cipher suite that was not offered MUST abort the handshake. Servers MUST ensure that they negotiate the same cipher suite when receiving a conformant updated ClientHello (if the server selects the cipher suite as the first step in the negotiation, then this will happen automatically). Upon receiving the ServerHello, clients MUST check that the cipher suite supplied in the ServerHello is the same as that in the HelloRetryRequest and otherwise abort the handshake with an “illegal_parameter” alert.

The value of selected_version in the HelloRetryRequest “supported_versions” extension MUST be retained in the ServerHello, and a client MUST abort the handshake with an “illegal_parameter” alert if the value changes.

4.2. Extensions

A number of TLS messages contain tag-length-value encoded extension structures.

   struct {
       ExtensionType extension_type;
       opaque extension_data<0..2^16-1>;
   } Extension;

   enum {
       server_name(0),                             /* RFC 6066 */
       max_fragment_length(1),                     /* RFC 6066 */
       status_request(5),                          /* RFC 6066 */
       supported_groups(10),                       /* RFC 4492, 7919 */
       signature_algorithms(13),                   /* [[this document]] */
       use_srtp(14),                               /* RFC 5764 */
       heartbeat(15),                              /* RFC 6520 */
       application_layer_protocol_negotiation(16), /* RFC 7301 */
       signed_certificate_timestamp(18),           /* RFC 6962 */
       client_certificate_type(19),                /* RFC 7250 */
       server_certificate_type(20),                /* RFC 7250 */
       padding(21),                                /* RFC 7685 */
       pre_shared_key(41),                         /* [[this document]] */
       early_data(42),                             /* [[this document]] */
       supported_versions(43),                     /* [[this document]] */
       cookie(44),                                 /* [[this document]] */
       psk_key_exchange_modes(45),                 /* [[this document]] */
       certificate_authorities(47),                /* [[this document]] */
       oid_filters(48),                            /* [[this document]] */
       post_handshake_auth(49),                    /* [[this document]] */
       signature_algorithms_cert(50),              /* [[this document]] */
       key_share(51),                              /* [[this document]] */
       (65535)
   } ExtensionType;

Here:

  • “extension_type” identifies the particular extension type.
  • “extension_data” contains information specific to the particular extension type.

The list of extension types is maintained by IANA as described in Section 11.
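
As a rough, non-normative illustration, the following Python sketch walks an encoded extension list (for each entry, a uint16 extension_type, a uint16 length, and that many bytes of extension_data) and rejects duplicates of the same type:

   def parse_extensions(buf):
       # Returns a dict mapping extension_type to extension_data.
       extensions = {}
       offset = 0
       while offset < len(buf):
           if offset + 4 > len(buf):
               raise ValueError("truncated extension header")
           ext_type = int.from_bytes(buf[offset:offset + 2], "big")
           ext_len = int.from_bytes(buf[offset + 2:offset + 4], "big")
           offset += 4
           if offset + ext_len > len(buf):
               raise ValueError("truncated extension_data")
           if ext_type in extensions:
               raise ValueError("more than one extension of the same type")
           extensions[ext_type] = buf[offset:offset + ext_len]
           offset += ext_len
       return extensions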

Extensions are generally structured in a request/response fashion, though some extensions are just indications with no corresponding response. The client sends its extension requests in the ClientHello message and the server sends its extension responses in the ServerHello, EncryptedExtensions, HelloRetryRequest and Certificate messages. The server sends extension requests in the CertificateRequest message which a client MAY respond to with a Certificate message. The server MAY also send unsolicited extensions in the NewSessionTicket, though the client does not respond directly to these.

Implementations MUST NOT send extension responses if the remote endpoint did not send the corresponding extension requests, with the exception of the “cookie” extension in HelloRetryRequest. Upon receiving such an extension, an endpoint MUST abort the handshake with an “unsupported_extension” alert.

The table below indicates the messages where a given extension may appear, using the following notation: CH (ClientHello), SH (ServerHello), EE (EncryptedExtensions), CT (Certificate), CR (CertificateRequest), NST (NewSessionTicket) and HRR (HelloRetryRequest). If an implementation receives an extension which it recognizes and which is not specified for the message in which it appears it MUST abort the handshake with an “illegal_parameter” alert.

   Extension                                          TLS 1.3
   --------------------------------------------------------------
   server_name [RFC6066]                              CH, EE
   max_fragment_length [RFC6066]                      CH, EE
   status_request [RFC6066]                           CH, CR, CT
   supported_groups [RFC7919]                         CH, EE
   signature_algorithms [RFC5246]                     CH, CR
   use_srtp [RFC5764]                                 CH, EE
   heartbeat [RFC6520]                                CH, EE
   application_layer_protocol_negotiation [RFC7301]   CH, EE
   signed_certificate_timestamp [RFC6962]             CH, CR, CT
   client_certificate_type [RFC7250]                  CH, EE
   server_certificate_type [RFC7250]                  CH, EE
   padding [RFC7685]                                  CH
   key_share [[this document]]                        CH, SH, HRR
   pre_shared_key [[this document]]                   CH, SH
   psk_key_exchange_modes [[this document]]           CH
   early_data [[this document]]                       CH, EE, NST
   cookie [[this document]]                           CH, HRR
   supported_versions [[this document]]               CH, SH, HRR
   certificate_authorities [[this document]]          CH, CR
   oid_filters [[this document]]                      CR
   post_handshake_auth [[this document]]              CH
   signature_algorithms_cert [[this document]]        CH, CR

When multiple extensions of different types are present, the extensions MAY appear in any order, with the exception of “pre_shared_key” Section 4.2.11 which MUST be the last extension in the ClientHello. There MUST NOT be more than one extension of the same type in a given extension block.

In TLS 1.3, unlike TLS 1.2, extensions are negotiated for each handshake even when in resumption-PSK mode. However, 0-RTT parameters are those negotiated in the previous handshake; mismatches may require rejecting 0-RTT (see Section 4.2.10).

There are subtle (and not so subtle) interactions that may occur in this protocol between new features and existing features which may result in a significant reduction in overall security. The following considerations should be taken into account when designing new extensions:

  • Some cases where a server does not agree to an extension are error conditions (e.g., the handshake cannot continue), and some are simply refusals to support particular features. In general, error alerts should be used for the former and a field in the server extension response for the latter.
  • Extensions should, as far as possible, be designed to prevent any attack that forces use (or non-use) of a particular feature by manipulation of handshake messages. This principle should be followed regardless of whether the feature is believed to cause a security problem. Often the fact that the extension fields are included in the inputs to the Finished message hashes will be sufficient, but extreme care is needed when the extension changes the meaning of messages sent in the handshake phase. Designers and implementors should be aware of the fact that until the handshake has been authenticated, active attackers can modify messages and insert, remove, or replace extensions.

4.2.1. Supported Versions

   struct {
       select (Handshake.msg_type) {
           case client_hello:
                ProtocolVersion versions<2..254>;

           case server_hello: /* and HelloRetryRequest */
                ProtocolVersion selected_version;
       };
   } SupportedVersions;

The “supported_versions” extension is used by the client to indicate which versions of TLS it supports and by the server to indicate which version it is using. The extension contains a list of supported versions in preference order, with the most preferred version first. Implementations of this specification MUST send this extension in the ClientHello containing all versions of TLS which they are prepared to negotiate (for this specification, that means minimally 0x0304, but if previous versions of TLS are allowed to be negotiated, they MUST be present as well).

If this extension is not present, servers which are compliant with this specification, and which also support TLS 1.2, MUST negotiate TLS 1.2 or prior as specified in [RFC5246], even if ClientHello.legacy_version is 0x0304 or later. Servers MAY abort the handshake upon receiving a ClientHello with legacy_version 0x0304 or later.

If this extension is present in the ClientHello, servers MUST NOT use the ClientHello.legacy_version value for version negotiation and MUST use only the “supported_versions” extension to determine client preferences. Servers MUST only select a version of TLS present in that extension and MUST ignore any unknown versions that are present in that extension. Note that this mechanism makes it possible to negotiate a version prior to TLS 1.2 if one side supports a sparse range. Implementations of TLS 1.3 which choose to support prior versions of TLS SHOULD support TLS 1.2. Servers MUST be prepared to receive ClientHellos that include this extension but do not include 0x0304 in the list of versions.

A server which negotiates a version of TLS prior to TLS 1.3 MUST set ServerHello.version and MUST NOT send the “supported_versions” extension. A server which negotiates TLS 1.3 MUST respond by sending a “supported_versions” extension containing the selected version value (0x0304). It MUST set the ServerHello.legacy_version field to 0x0303 (TLS 1.2). Clients MUST check for this extension prior to processing the rest of the ServerHello (although they will have to parse the ServerHello in order to read the extension). If this extension is present, clients MUST ignore the ServerHello.legacy_version value and MUST use only the “supported_versions” extension to determine the selected version. If the “supported_versions” extension in the ServerHello contains a version not offered by the client or contains a version prior to TLS 1.3, the client MUST abort the handshake with an “illegal_parameter” alert.
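
A server-side sketch of version selection driven only by “supported_versions” (Python, illustrative; the server preference list is an assumption of the example):

   TLS13 = 0x0304
   TLS12 = 0x0303

   def select_version(client_versions, server_supported=(TLS13, TLS12)):
       # Choose the server's most preferred version that the client offered,
       # ignoring any client-offered values the server does not recognize.
       for version in server_supported:
           if version in client_versions:
               return version
       return None   # no mutually supported version: the handshake is aborted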

4.2.1.1. Draft Version Indicator

RFC EDITOR: PLEASE REMOVE THIS SECTION

While the eventual version indicator for the RFC version of TLS 1.3 will be 0x0304, implementations of draft versions of this specification SHOULD instead advertise 0x7f00 | draft_version in the ServerHello and HelloRetryRequest “supported_versions” extension. For instance, draft-17 would be encoded as 0x7f11. This allows pre-RFC implementations to safely negotiate with each other, even if they would otherwise be incompatible.

4.2.2. Cookie

   struct {
       opaque cookie<1..2^16-1>;
   } Cookie;

Cookies serve two primary purposes:

  • Allowing the server to force the client to demonstrate reachability at their apparent network address (thus providing a measure of DoS protection). This is primarily useful for non-connection-oriented transports (see [RFC6347] for an example of this).
  • Allowing the server to offload state to the client, thus allowing it to send a HelloRetryRequest without storing any state. The server can do this by storing the hash of the ClientHello in the HelloRetryRequest cookie (protected with some suitable integrity algorithm).

When sending a HelloRetryRequest, the server MAY provide a “cookie” extension to the client (this is an exception to the usual rule that the only extensions that may be sent are those that appear in the ClientHello). When sending the new ClientHello, the client MUST copy the contents of the extension received in the HelloRetryRequest into a “cookie” extension in the new ClientHello. Clients MUST NOT use cookies in their initial ClientHello in subsequent connections.

When a server is operating statelessly it may receive an unprotected record of type change_cipher_spec between the first and second ClientHello (see Section 5). Since the server is not storing any state this will appear as if it were the first message to be received. Servers operating statelessly MUST ignore these records.

4.2.3. Signature Algorithms

TLS 1.3 provides two extensions for indicating which signature algorithms may be used in digital signatures. The “signature_algorithms_cert” extension applies to signatures in certificates and the “signature_algorithms” extension, which originally appeared in TLS 1.2, applies to signatures in CertificateVerify messages. The keys found in certificates MUST also be of appropriate type for the signature algorithms they are used with. This is a particular issue for RSA keys and PSS signatures, as described below. If no “signature_algorithms_cert” extension is present, then the “signature_algorithms” extension also applies to signatures appearing in certificates. Clients which desire the server to authenticate itself via a certificate MUST send “signature_algorithms”. If a server is authenticating via a certificate and the client has not sent a “signature_algorithms” extension, then the server MUST abort the handshake with a “missing_extension” alert (see Section 9.2).

The “signature_algorithms_cert” extension was added to allow implementations which supported different sets of algorithms for certificates and in TLS itself to clearly signal their capabilities. TLS 1.2 implementations SHOULD also process this extension. Implementations which have the same policy in both cases MAY omit the “signature_algorithms_cert” extension.

The “extension_data” field of these extensions contains a SignatureSchemeList value:

   enum {
       /* RSASSA-PKCS1-v1_5 algorithms */
       rsa_pkcs1_sha256(0x0401),
       rsa_pkcs1_sha384(0x0501),
       rsa_pkcs1_sha512(0x0601),

       /* ECDSA algorithms */
       ecdsa_secp256r1_sha256(0x0403),
       ecdsa_secp384r1_sha384(0x0503),
       ecdsa_secp521r1_sha512(0x0603),

       /* RSASSA-PSS algorithms with public key OID rsaEncryption */
       rsa_pss_rsae_sha256(0x0804),
       rsa_pss_rsae_sha384(0x0805),
       rsa_pss_rsae_sha512(0x0806),

       /* EdDSA algorithms */
       ed25519(0x0807),
       ed448(0x0808),

       /* RSASSA-PSS algorithms with public key OID RSASSA-PSS */
       rsa_pss_pss_sha256(0x0809),
       rsa_pss_pss_sha384(0x080a),
       rsa_pss_pss_sha512(0x080b),

       /* Legacy algorithms */
       rsa_pkcs1_sha1(0x0201),
       ecdsa_sha1(0x0203),

       /* Reserved Code Points */
       private_use(0xFE00..0xFFFF),
       (0xFFFF)
   } SignatureScheme;

   struct {
       SignatureScheme supported_signature_algorithms<2..2^16-2>;
   } SignatureSchemeList;

Note: This enum is named “SignatureScheme” because there is already a “SignatureAlgorithm” type in TLS 1.2, which this replaces. We use the term “signature algorithm” throughout the text.

Each SignatureScheme value lists a single signature algorithm that the client is willing to verify. The values are indicated in descending order of preference. Note that a signature algorithm takes as input an arbitrary-length message, rather than a digest. Algorithms which traditionally act on a digest should be defined in TLS to first hash the input with a specified hash algorithm and then proceed as usual. The code point groups listed above have the following meanings:

RSASSA-PKCS1-v1_5 algorithms
Indicates a signature algorithm using RSASSA-PKCS1-v1_5 [RFC8017] with the corresponding hash algorithm as defined in [SHS]. These values refer solely to signatures which appear in certificates (see Section 4.4.2.2) and are not defined for use in signed TLS handshake messages, although they MAY appear in “signature_algorithms” and “signature_algorithms_cert” for backward compatibility with TLS 1.2.
ECDSA algorithms
Indicates a signature algorithm using ECDSA [ECDSA], the corresponding curve as defined in ANSI X9.62 [X962] and FIPS 186-4 [DSS], and the corresponding hash algorithm as defined in [SHS]. The signature is represented as a DER-encoded [X690] ECDSA-Sig-Value structure.
RSASSA-PSS RSAE algorithms
Indicates a signature algorithm using RSASSA-PSS [RFC8017] with mask generation function 1. The digest used in the mask generation function and the digest being signed are both the corresponding hash algorithm as defined in [SHS]. The length of the salt MUST be equal to the length of the output of the digest algorithm. If the public key is carried in an X.509 certificate, it MUST use the rsaEncryption OID [RFC5280].
EdDSA algorithms
Indicates a signature algorithm using EdDSA as defined in [RFC8032] or its successors. Note that these correspond to the “PureEdDSA” algorithms and not the “prehash” variants.
RSASSA-PSS PSS algorithms
Indicates a signature algorithm using RSASSA-PSS [RFC8017] with mask generation function 1. The digest used in the mask generation function and the digest being signed are both the corresponding hash algorithm as defined in [SHS]. The length of the salt MUST be equal to the length of the output of the digest algorithm. If the public key is carried in an X.509 certificate, it MUST use the RSASSA-PSS OID [RFC5756]. When used in certificate signatures, the algorithm parameters MUST be DER encoded. If the corresponding public key’s parameters are present, then the parameters in the signature MUST be identical to those in the public key.
Legacy algorithms
Indicates algorithms which are being deprecated because they use algorithms with known weaknesses, specifically SHA-1, which is used in this context with either RSA using RSASSA-PKCS1-v1_5 or ECDSA. These values refer solely to signatures which appear in certificates (see Section 4.4.2.2) and are not defined for use in signed TLS handshake messages, although they MAY appear in “signature_algorithms” and “signature_algorithms_cert” for backward compatibility with TLS 1.2. Endpoints SHOULD NOT negotiate these algorithms but are permitted to do so solely for backward compatibility. Clients offering these values MUST list them as the lowest priority (listed after all other algorithms in SignatureSchemeList). TLS 1.3 servers MUST NOT offer a SHA-1 signed certificate unless no valid certificate chain can be produced without it (see Section 4.4.2.2).

The signatures on certificates that are self-signed or certificates that are trust anchors are not validated since they begin a certification path (see [RFC5280], Section 3.2). A certificate that begins a certification path MAY use a signature algorithm that is not advertised as being supported in the “signature_algorithms” extension.

Note that TLS 1.2 defines this extension differently. TLS 1.3 implementations willing to negotiate TLS 1.2 MUST behave in accordance with the requirements of [RFC5246] when negotiating that version. In particular:

  • TLS 1.2 ClientHellos MAY omit this extension.
  • In TLS 1.2, the extension contained hash/signature pairs. The pairs are encoded in two octets, so SignatureScheme values have been allocated to align with TLS 1.2’s encoding. Some legacy pairs are left unallocated. These algorithms are deprecated as of TLS 1.3. They MUST NOT be offered or negotiated by any implementation. In particular, MD5 [SLOTH], SHA-224, and DSA MUST NOT be used.
  • ECDSA signature schemes align with TLS 1.2’s ECDSA hash/signature pairs. However, the old semantics did not constrain the signing curve. If TLS 1.2 is negotiated, implementations MUST be prepared to accept a signature that uses any curve that they advertised in the “supported_groups” extension.
  • Implementations that advertise support for RSASSA-PSS (which is mandatory in TLS 1.3) MUST be prepared to accept a signature using that scheme even when TLS 1.2 is negotiated. In TLS 1.2, RSASSA-PSS is used with RSA cipher suites.

4.2.4. Certificate Authorities

The “certificate_authorities” extension is used to indicate the certificate authorities which an endpoint supports and which SHOULD be used by the receiving endpoint to guide certificate selection.

The body of the “certificate_authorities” extension consists of a CertificateAuthoritiesExtension structure.

   opaque DistinguishedName<1..2^16-1>;

   struct {
       DistinguishedName authorities<3..2^16-1>;
   } CertificateAuthoritiesExtension;
authorities
A list of the distinguished names [X501] of acceptable certificate authorities, represented in DER-encoded [X690] format. These distinguished names specify a desired distinguished name for a trust anchor or subordinate CA; thus, this message can be used to describe known trust anchors as well as a desired authorization space.

The client MAY send the “certificate_authorities” extension in the ClientHello message. The server MAY send it in the CertificateRequest message.

The “trusted_ca_keys” extension, which serves a similar purpose [RFC6066], but is more complicated, is not used in TLS 1.3 (although it may appear in ClientHello messages from clients which are offering prior versions of TLS).

4.2.5. OID Filters

The “oid_filters” extension allows servers to provide a set of OID/value pairs which it would like the client’s certificate to match. This extension, if provided by the server, MUST only be sent in the CertificateRequest message.

   struct {
       opaque certificate_extension_oid<1..2^8-1>;
       opaque certificate_extension_values<0..2^16-1>;
   } OIDFilter;

   struct {
       OIDFilter filters<0..2^16-1>;
   } OIDFilterExtension;
filters
A list of certificate extension OIDs [RFC5280] with their allowed value(s) and represented in DER-encoded [X690] format. Some certificate extension OIDs allow multiple values (e.g., Extended Key Usage). If the server has included a non-empty filters list, the client certificate included in the response MUST contain all of the specified extension OIDs that the client recognizes. For each extension OID recognized by the client, all of the specified values MUST be present in the client certificate (but the certificate MAY have other values as well). However, the client MUST ignore and skip any unrecognized certificate extension OIDs. If the client ignored some of the required certificate extension OIDs and supplied a certificate that does not satisfy the request, the server MAY at its discretion either continue the connection without client authentication, or abort the handshake with an “unsupported_certificate” alert. Any given OID MUST NOT appear more than once in the filters list.

PKIX RFCs define a variety of certificate extension OIDs and their corresponding value types. Depending on the type, matching certificate extension values are not necessarily bitwise-equal. It is expected that TLS implementations will rely on their PKI libraries to perform certificate selection using certificate extension OIDs.

This document defines matching rules for two standard certificate extensions defined in [RFC5280]:

  • The Key Usage extension in a certificate matches the request when all key usage bits asserted in the request are also asserted in the Key Usage certificate extension.
  • The Extended Key Usage extension in a certificate matches the request when all key purpose OIDs present in the request are also found in the Extended Key Usage certificate extension. The special anyExtendedKeyUsage OID MUST NOT be used in the request.

Separate specifications may define matching rules for other certificate extensions.

4.2.6. Post-Handshake Client Authentication

The “post_handshake_auth” extension is used to indicate that a client is willing to perform post-handshake authentication (Section 4.6.2). Servers MUST NOT send a post-handshake CertificateRequest to clients which do not offer this extension. Servers MUST NOT send this extension.

   struct {} PostHandshakeAuth;

The “extension_data” field of the “post_handshake_auth” extension is zero length.

4.2.7. Negotiated Groups

When sent by the client, the “supported_groups” extension indicates the named groups which the client supports for key exchange, ordered from most preferred to least preferred.

Note: In versions of TLS prior to TLS 1.3, this extension was named “elliptic_curves” and only contained elliptic curve groups. See [RFC4492] and [RFC7919]. This extension was also used to negotiate ECDSA curves. Signature algorithms are now negotiated independently (see Section 4.2.3).

The “extension_data” field of this extension contains a “NamedGroupList” value:

   enum {

       /* Elliptic Curve Groups (ECDHE) */
       secp256r1(0x0017), secp384r1(0x0018), secp521r1(0x0019),
       x25519(0x001D), x448(0x001E),

       /* Finite Field Groups (DHE) */
       ffdhe2048(0x0100), ffdhe3072(0x0101), ffdhe4096(0x0102),
       ffdhe6144(0x0103), ffdhe8192(0x0104),

       /* Reserved Code Points */
       ffdhe_private_use(0x01FC..0x01FF),
       ecdhe_private_use(0xFE00..0xFEFF),
       (0xFFFF)
   } NamedGroup;

   struct {
       NamedGroup named_group_list<2..2^16-1>;
   } NamedGroupList;
Elliptic Curve Groups (ECDHE)
Indicates support for the corresponding named curve, defined either in FIPS 186-4 [DSS] or in [RFC7748]. Values 0xFE00 through 0xFEFF are reserved for private use.
Finite Field Groups (DHE)
Indicates support of the corresponding finite field group, defined in [RFC7919]. Values 0x01FC through 0x01FF are reserved for private use.

Items in named_group_list are ordered according to the client’s preferences (most preferred choice first).

As of TLS 1.3, servers are permitted to send the “supported_groups” extension to the client. Clients MUST NOT act upon any information found in “supported_groups” prior to successful completion of the handshake but MAY use the information learned from a successfully completed handshake to change what groups they use in their “key_share” extension in subsequent connections. If the server has a group it prefers to the ones in the “key_share” extension but is still willing to accept the ClientHello, it SHOULD send “supported_groups” to update the client’s view of its preferences; this extension SHOULD contain all groups the server supports, regardless of whether they are currently supported by the client.

4.2.8. Key Share

The “key_share” extension contains the endpoint’s cryptographic parameters.

Clients MAY send an empty client_shares vector in order to request group selection from the server, at the cost of an additional round trip (see Section 4.1.4).

   struct {
       NamedGroup group;
       opaque key_exchange<1..2^16-1>;
   } KeyShareEntry;
group
The named group for the key being exchanged.
key_exchange
Key exchange information. The contents of this field are determined by the specified group and its corresponding definition. Finite Field Diffie-Hellman [DH] parameters are described in Section 4.2.8.1; Elliptic Curve Diffie-Hellman parameters are described in Section 4.2.8.2.

In the ClientHello message, the “extension_data” field of this extension contains a “KeyShareClientHello” value:

   struct {
       KeyShareEntry client_shares<0..2^16-1>;
   } KeyShareClientHello;
client_shares
A list of offered KeyShareEntry values in descending order of client preference.

This vector MAY be empty if the client is requesting a HelloRetryRequest. Each KeyShareEntry value MUST correspond to a group offered in the “supported_groups” extension and MUST appear in the same order. However, the values MAY be a non-contiguous subset of the “supported_groups” extension and MAY omit the most preferred groups. Such a situation could arise if the most preferred groups are new and unlikely to be supported in enough places to make pregenerating key shares for them efficient.

Clients can offer as many KeyShareEntry values as the number of supported groups they are offering, each representing a single set of key exchange parameters. For instance, a client might offer shares for several elliptic curves or multiple FFDHE groups. The key_exchange values for each KeyShareEntry MUST be generated independently. Clients MUST NOT offer multiple KeyShareEntry values for the same group. Clients MUST NOT offer any KeyShareEntry values for groups not listed in the client’s “supported_groups” extension. Servers MAY check for violations of these rules and abort the handshake with an “illegal_parameter” alert if one is violated.

In a HelloRetryRequest message, the “extension_data” field of this extension contains a KeyShareHelloRetryRequest value:

   struct {
       NamedGroup selected_group;
   } KeyShareHelloRetryRequest;
selected_group
The mutually supported group the server intends to negotiate and is requesting a retried ClientHello/KeyShare for.

Upon receipt of this extension in a HelloRetryRequest, the client MUST verify that (1) the selected_group field corresponds to a group which was provided in the “supported_groups” extension in the original ClientHello; and (2) the selected_group field does not correspond to a group which was provided in the “key_share” extension in the original ClientHello. If either of these checks fails, then the client MUST abort the handshake with an “illegal_parameter” alert. Otherwise, when sending the new ClientHello, the client MUST replace the original “key_share” extension with one containing only a new KeyShareEntry for the group indicated in the selected_group field of the triggering HelloRetryRequest.

In a ServerHello message, the “extension_data” field of this extension contains a KeyShareServerHello value:

   struct {
       KeyShareEntry server_share;
   } KeyShareServerHello;
server_share
A single KeyShareEntry value that is in the same group as one of the client’s shares.

If using (EC)DHE key establishment, servers offer exactly one KeyShareEntry in the ServerHello. This value MUST be in the same group as the KeyShareEntry value offered by the client that the server has selected for the negotiated key exchange. Servers MUST NOT send a KeyShareEntry for any group not indicated in the “supported_groups” extension and MUST NOT send a KeyShareEntry when using the “psk_ke” PskKeyExchangeMode. If using (EC)DHE key establishment, and a HelloRetryRequest containing a “key_share” extension was received by the client, the client MUST verify that the selected NamedGroup in the ServerHello is the same as that in the HelloRetryRequest. If this check fails, the client MUST abort the handshake with an “illegal_parameter” alert.

4.2.8.1. Diffie-Hellman Parameters

Diffie-Hellman [DH] parameters for both clients and servers are encoded in the opaque key_exchange field of a KeyShareEntry in a KeyShare structure. The opaque value contains the Diffie-Hellman public value (Y = g^X mod p) for the specified group (see [RFC7919] for group definitions) encoded as a big-endian integer and padded to the left with zeros to the size of p in bytes.

Note: For a given Diffie-Hellman group, the padding results in all public keys having the same length.

Peers MUST validate each other’s public key Y by ensuring that 1 < Y < p-1. This check ensures that the remote peer is properly behaved and isn’t forcing the local system into a small subgroup.
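
To make the padding and validation rules concrete, a short Python sketch (non-normative; p is the group prime from [RFC7919]):

   def encode_dh_public(Y, p):
       # Big-endian, left-padded with zeros to the byte length of p.
       return Y.to_bytes((p.bit_length() + 7) // 8, "big")

   def validate_dh_public(key_exchange, p):
       # Reject values outside 1 < Y < p-1, which would force a small subgroup.
       Y = int.from_bytes(key_exchange, "big")
       if not 1 < Y < p - 1:
           raise ValueError("invalid Diffie-Hellman public value")
       return Y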

4.2.8.2. ECDHE Parameters

ECDHE parameters for both clients and servers are encoded in the opaque key_exchange field of a KeyShareEntry in a KeyShare structure.

For secp256r1, secp384r1 and secp521r1, the contents are the serialized value of the following struct:

   struct {
       uint8 legacy_form = 4;
       opaque X[coordinate_length];
       opaque Y[coordinate_length];
   } UncompressedPointRepresentation;

X and Y respectively are the binary representations of the x and y values in network byte order. There are no internal length markers, so each number representation occupies as many octets as implied by the curve parameters. For P-256 this means that each of X and Y use 32 octets, padded on the left by zeros if necessary. For P-384 they take 48 octets each, and for P-521 they take 66 octets each.
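
A sketch of serializing the UncompressedPointRepresentation for P-256 (Python, illustrative; x and y are the affine coordinates as integers):

   def encode_uncompressed_point(x, y, coordinate_length=32):
       # legacy_form (0x04) followed by X and Y, each left-padded with zeros
       # to the coordinate length implied by the curve (32 octets for P-256).
       return (b"\x04"
               + x.to_bytes(coordinate_length, "big")
               + y.to_bytes(coordinate_length, "big"))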

For the curves secp256r1, secp384r1 and secp521r1, peers MUST validate each other’s public value Q by ensuring that the point is a valid point on the elliptic curve. The appropriate validation procedures are defined in Section 4.3.7 of [X962] and alternatively in Section 5.6.2.3 of [KEYAGREEMENT]. This process consists of three steps: (1) verify that Q is not the point at infinity (O), (2) verify that for Q = (x, y) both integers x and y are in the correct interval, (3) ensure that (x, y) is a correct solution to the elliptic curve equation. For these curves, implementers do not need to verify membership in the correct subgroup.

For X25519 and X448, the contents of the public value are the byte string inputs and outputs of the corresponding functions defined in [RFC7748], 32 bytes for X25519 and 56 bytes for X448.

Note: Versions of TLS prior to 1.3 permitted point format negotiation; TLS 1.3 removes this feature in favor of a single point format for each curve.

4.2.9. Pre-Shared Key Exchange Modes

In order to use PSKs, clients MUST also send a “psk_key_exchange_modes” extension. The semantics of this extension are that the client only supports the use of PSKs with these modes, which restricts both the use of PSKs offered in this ClientHello and those which the server might supply via NewSessionTicket.

A client MUST provide a “psk_key_exchange_modes” extension if it offers a “pre_shared_key” extension. If clients offer “pre_shared_key” without a “psk_key_exchange_modes” extension, servers MUST abort the handshake. Servers MUST NOT select a key exchange mode that is not listed by the client. This extension also restricts the modes for use with PSK resumption; servers SHOULD NOT send NewSessionTicket with tickets that are not compatible with the advertised modes; however, if a server does so, the impact will just be that the client’s attempts at resumption fail.

The server MUST NOT send a “psk_key_exchange_modes” extension.

   enum { psk_ke(0), psk_dhe_ke(1), (255) } PskKeyExchangeMode;

   struct {
       PskKeyExchangeMode ke_modes<1..255>;
   } PskKeyExchangeModes;
psk_ke
PSK-only key establishment. In this mode, the server MUST NOT supply a “key_share” value.
psk_dhe_ke
PSK with (EC)DHE key establishment. In this mode, the client and server MUST supply “key_share” values as described in Section 4.2.8.

Any future values that are allocated must ensure that the transmitted protocol messages unambiguously identify which mode was selected by the server; at present, this is indicated by the presence of the “key_share” in the ServerHello.

4.2.10. Early Data Indication

When a PSK is used and early data is allowed for that PSK, the client can send application data in its first flight of messages. If the client opts to do so, it MUST supply both the “early_data” extension as well as the “pre_shared_key” extension.

The “extension_data” field of this extension contains an “EarlyDataIndication” value.

   struct {} Empty;

   struct {
       select (Handshake.msg_type) {
           case new_session_ticket:   uint32 max_early_data_size;
           case client_hello:         Empty;
           case encrypted_extensions: Empty;
       };
   } EarlyDataIndication;

See Section 4.6.1 for the use of the max_early_data_size field.

The parameters for the 0-RTT data (version, symmetric cipher suite, ALPN protocol, etc.) are those associated with the PSK in use. For externally provisioned PSKs, the associated values are those provisioned along with the key. For PSKs established via a NewSessionTicket message, the associated values are those which were negotiated in the connection which established the PSK. The PSK used to encrypt the early data MUST be the first PSK listed in the client’s “pre_shared_key” extension.

For PSKs provisioned via NewSessionTicket, a server MUST validate that the ticket age for the selected PSK identity (computed by subtracting ticket_age_add from PskIdentity.obfuscated_ticket_age modulo 2^32) is within a small tolerance of the time since the ticket was issued (see Section 8). If it is not, the server SHOULD proceed with the handshake but reject 0-RTT, and SHOULD NOT take any other action that assumes that this ClientHello is fresh.
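
A server-side sketch of this freshness check (Python, non-normative; the tolerance value is an assumption of the example, not mandated by this document):

   def ticket_age_acceptable(obfuscated_ticket_age, ticket_age_add,
                             seconds_since_issue, tolerance_ms=10000):
       # Recover the client's view of the ticket age in milliseconds and
       # compare it with the server's own measurement of time since issuance.
       client_age_ms = (obfuscated_ticket_age - ticket_age_add) % (2 ** 32)
       server_age_ms = int(seconds_since_issue * 1000)
       return abs(client_age_ms - server_age_ms) <= tolerance_ms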

0-RTT messages sent in the first flight have the same (encrypted) content types as messages of the same type sent in other flights (handshake and application_data) but are protected under different keys. After receiving the server’s Finished message, if the server has accepted early data, an EndOfEarlyData message will be sent to indicate the key change. This message will be encrypted with the 0-RTT traffic keys.

A server which receives an “early_data” extension MUST behave in one of three ways:

  • Ignore the extension and return a regular 1-RTT response. The server then skips past early data by attempting to deprotect received records using the handshake traffic key, discarding records which fail deprotection (up to the configured max_early_data_size). Once a record is deprotected successfully, it is treated as the start of the client’s second flight and the server proceeds as with an ordinary 1-RTT handshake.
  • Request that the client send another ClientHello by responding with a HelloRetryRequest. A client MUST NOT include the “early_data” extension in its followup ClientHello. The server then ignores early data by skipping all records with external content type of “application_data” (indicating that they are encrypted), up to the configured max_early_data_size.
  • Return its own “early_data” extension in EncryptedExtensions, indicating that it intends to process the early data. It is not possible for the server to accept only a subset of the early data messages. Even though the server sends a message accepting early data, the actual early data itself may already be in flight by the time the server generates this message.

In order to accept early data, the server MUST have accepted a PSK cipher suite and selected the first key offered in the client’s “pre_shared_key” extension. In addition, it MUST verify that the following values are the same as those associated with the selected PSK:

  • The TLS version number
  • The selected cipher suite
  • The selected ALPN [RFC7301] protocol, if any

These requirements are a superset of those needed to perform a 1-RTT handshake using the PSK in question. For externally established PSKs, the associated values are those provisioned along with the key. For PSKs established via a NewSessionTicket message, the associated values are those negotiated in the connection during which the ticket was established.
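A non-normative sketch of these acceptance checks (the PskInfo record and the negotiated_* parameters are assumptions of this sketch, standing in for whatever state an implementation keeps):

   from dataclasses import dataclass
   from typing import Optional

   @dataclass
   class PskInfo:
       tls_version: int        # e.g., 0x0304
       cipher_suite: int       # e.g., 0x1301 (TLS_AES_128_GCM_SHA256)
       alpn: Optional[str]     # e.g., "h2", or None

   def may_accept_early_data(selected_identity, psk: PskInfo,
                             negotiated_version, negotiated_suite,
                             negotiated_alpn) -> bool:
       return (selected_identity == 0                    # first offered PSK
               and negotiated_version == psk.tls_version
               and negotiated_suite == psk.cipher_suite
               and negotiated_alpn == psk.alpn)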

Future extensions MUST define their interaction with 0-RTT.

If any of these checks fail, the server MUST NOT respond with the extension and must discard all the first flight data using one of the first two mechanisms listed above (thus falling back to 1-RTT or 2-RTT). If the client attempts a 0-RTT handshake but the server rejects it, the server will generally not have the 0-RTT record protection keys and must instead use trial decryption (either with the 1-RTT handshake keys or by looking for a cleartext ClientHello in the case of HelloRetryRequest) to find the first non-0-RTT message.

If the server chooses to accept the “early_data” extension, then it MUST comply with the same error handling requirements specified for all records when processing early data records. Specifically, if the server fails to decrypt a 0-RTT record following an accepted “early_data” extension it MUST terminate the connection with a “bad_record_mac” alert as per Section 5.2.

If the server rejects the “early_data” extension, the client application MAY opt to retransmit the application data previously sent in early data once the handshake has been completed. Note that automatic re-transmission of early data could result in assumptions about the status of the connection being incorrect. For instance, when the negotiated connection selects a different ALPN protocol from what was used for the early data, an application might need to construct different messages. Similarly, if early data assumes anything about the connection state, it might be sent in error after the handshake completes.

A TLS implementation SHOULD NOT automatically re-send early data; applications are in a better position to decide when re-transmission is appropriate. A TLS implementation MUST NOT automatically re-send early data unless the negotiated connection selects the same ALPN protocol.

4.2.11. Pre-Shared Key Extension

The “pre_shared_key” extension is used to negotiate the identity of the pre-shared key to be used with a given handshake in association with PSK key establishment.

The “extension_data” field of this extension contains a “PreSharedKeyExtension” value:

   struct {
       opaque identity<1..2^16-1>;
       uint32 obfuscated_ticket_age;
   } PskIdentity;

   opaque PskBinderEntry<32..255>;

   struct {
       PskIdentity identities<7..2^16-1>;
       PskBinderEntry binders<33..2^16-1>;
   } OfferedPsks;

   struct {
       select (Handshake.msg_type) {
           case client_hello: OfferedPsks;
           case server_hello: uint16 selected_identity;
       };
   } PreSharedKeyExtension;
identity
A label for a key. For instance, a ticket defined in Appendix B.3.4 or a label for a pre-shared key established externally.
obfuscated_ticket_age
An obfuscated version of the age of the key. Section 4.2.11.1 describes how to form this value for identities established via the NewSessionTicket message. For identities established externally an obfuscated_ticket_age of 0 SHOULD be used, and servers MUST ignore the value.
identities
A list of the identities that the client is willing to negotiate with the server. If sent alongside the “early_data” extension (see Section 4.2.10), the first identity is the one used for 0-RTT data.
binders
A series of HMAC values, one for each PSK offered in the “pre_shared_key” extension and in the same order, computed as described below.
selected_identity
The server’s chosen identity expressed as a (0-based) index into the identities in the client’s list.

Each PSK is associated with a single Hash algorithm. For PSKs established via the ticket mechanism (Section 4.6.1), this is the KDF Hash algorithm on the connection where the ticket was established. For externally established PSKs, the Hash algorithm MUST be set when the PSK is established, or default to SHA-256 if no such algorithm is defined. The server MUST ensure that it selects a compatible PSK (if any) and cipher suite.

In TLS versions prior to TLS 1.3, the Server Name Indication (SNI) value was intended to be associated with the session (Section 3 of [RFC6066]), with the server being required to enforce that the SNI value associated with the session matches the one specified in the resumption handshake. However, in reality the implementations were not consistent on which of two supplied SNI values they would use, leading to the consistency requirement being de-facto enforced by the clients. In TLS 1.3, the SNI value is always explicitly specified in the resumption handshake, and there is no need for the server to associate an SNI value with the ticket. Clients, however, SHOULD store the SNI with the PSK to fulfill the requirements of Section 4.6.1.

Implementor’s note: when session resumption is the primary use case of PSKs the most straightforward way to implement the PSK/cipher suite matching requirements is to negotiate the cipher suite first and then exclude any incompatible PSKs. Any unknown PSKs (e.g., they are not in the PSK database or are encrypted with an unknown key) SHOULD simply be ignored. If no acceptable PSKs are found, the server SHOULD perform a non-PSK handshake if possible. If backwards compatibility is important, client provided, externally established PSKs SHOULD influence cipher suite selection.
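A non-normative sketch of this matching strategy (the psk_database lookup, the hash_for_suite() helper, and the hash_alg attribute are assumptions of this sketch):

   def select_psk(offered_identities, negotiated_suite, psk_database,
                  hash_for_suite):
       """Return (selected_identity, psk) or None to fall back to non-PSK."""
       suite_hash = hash_for_suite(negotiated_suite)      # e.g., "sha256"
       for index, identity in enumerate(offered_identities):
           psk = psk_database.get(identity)               # unknown -> ignore
           if psk is None or psk.hash_alg != suite_hash:  # incompatible -> skip
               continue
           return index, psk   # candidate; its binder must still be validated
       return None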

Prior to accepting PSK key establishment, the server MUST validate the corresponding binder value (see Section 4.2.11.2 below). If this value is not present or does not validate, the server MUST abort the handshake. Servers SHOULD NOT attempt to validate multiple binders; rather they SHOULD select a single PSK and validate solely the binder that corresponds to that PSK. See Section 8.2 and Appendix E.6 for the security rationale for this requirement. In order to accept PSK key establishment, the server sends a “pre_shared_key” extension indicating the selected identity.

Clients MUST verify that the server’s selected_identity is within the range supplied by the client, that the server selected a cipher suite indicating a Hash associated with the PSK and that a server “key_share” extension is present if required by the ClientHello “psk_key_exchange_modes”. If these values are not consistent the client MUST abort the handshake with an “illegal_parameter” alert.

If the server supplies an “early_data” extension, the client MUST verify that the server’s selected_identity is 0. If any other value is returned, the client MUST abort the handshake with an “illegal_parameter” alert.

The “pre_shared_key” extension MUST be the last extension in the ClientHello (this facilitates implementation as described below). Servers MUST check that it is the last extension and otherwise fail the handshake with an “illegal_parameter” alert.

4.2.11.1. Ticket Age

The client’s view of the age of a ticket is the time since the receipt of the NewSessionTicket message. Clients MUST NOT attempt to use tickets which have ages greater than the “ticket_lifetime” value which was provided with the ticket. The “obfuscated_ticket_age” field of each PskIdentity contains an obfuscated version of the ticket age formed by taking the age in milliseconds and adding the “ticket_age_add” value that was included with the ticket (see Section 4.6.1), modulo 2^32. This addition prevents passive observers from correlating connections unless tickets are reused. Note that the “ticket_lifetime” field in the NewSessionTicket message is in seconds but the “obfuscated_ticket_age” is in milliseconds. Because ticket lifetimes are restricted to a week, 32 bits is enough to represent any plausible age, even in milliseconds.
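For illustration, the client-side computation can be sketched as follows (non-normative; ticket_received_ms is assumed to have been recorded when the NewSessionTicket arrived):

   import time

   def obfuscated_ticket_age(ticket_received_ms, ticket_age_add):
       """Client-side computation of PskIdentity.obfuscated_ticket_age."""
       age_ms = int(time.time() * 1000) - ticket_received_ms
       return (age_ms + ticket_age_add) % (2 ** 32)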

4.2.11.2. PSK Binder

The PSK binder value forms a binding between a PSK and the current handshake, as well as a binding between the handshake in which the PSK was generated (if via a NewSessionTicket message) and the current handshake. Each entry in the binders list is computed as an HMAC over a transcript hash (see Section 4.4.1) containing a partial ClientHello up to and including the PreSharedKeyExtension.identities field. That is, it includes all of the ClientHello but not the binders list itself. The length fields for the message (including the overall length, the length of the extensions block, and the length of the “pre_shared_key” extension) are all set as if binders of the correct lengths were present.

The PskBinderEntry is computed in the same way as the Finished message (Section 4.4.4) but with the BaseKey being the binder_key derived via the key schedule from the corresponding PSK which is being offered (see Section 7.1).

If the handshake includes a HelloRetryRequest, the initial ClientHello and HelloRetryRequest are included in the transcript along with the new ClientHello. For instance, if the client sends ClientHello1, its binder will be computed over:

   Transcript-Hash(Truncate(ClientHello1))

Where Truncate() removes the binders list from the ClientHello.

If the server responds with HelloRetryRequest, and the client then sends ClientHello2, its binder will be computed over:

   Transcript-Hash(ClientHello1,
                   HelloRetryRequest,
                   Truncate(ClientHello2))

The full ClientHello1/ClientHello2 is included in all other handshake hash computations. Note that in the first flight, Truncate(ClientHello1) is hashed directly, but in the second flight, ClientHello1 is hashed and then reinjected as a “message_hash” message, as described in Section 4.4.1.
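A non-normative sketch of the binder computation for the first flight, assuming SHA-256 and a binder_key already derived per Section 7.1; truncated_client_hello is the serialized ClientHello handshake message with the binders list removed but with all length fields set as described above. The single-block HKDF-Expand shown is sufficient because the output length equals the hash length.

   import hashlib, hmac

   def hkdf_expand_label(secret, label, context, length):
       # HkdfLabel: uint16 length, opaque label<7..255> = "tls13 " + label,
       #            opaque context<0..255>
       info = (length.to_bytes(2, "big")
               + bytes([6 + len(label)]) + b"tls13 " + label
               + bytes([len(context)]) + context)
       # One HKDF-Expand block (enough when length <= hash size)
       return hmac.new(secret, info + b"\x01", "sha256").digest()[:length]

   def compute_binder(binder_key, truncated_client_hello):
       finished_key = hkdf_expand_label(binder_key, b"finished", b"", 32)
       transcript = hashlib.sha256(truncated_client_hello).digest()
       return hmac.new(finished_key, transcript, "sha256").digest()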

4.2.11.3. Processing Order

Clients are permitted to “stream” 0-RTT data until they receive the server’s Finished, only then sending the EndOfEarlyData message, followed by the rest of the handshake. In order to avoid deadlocks, when accepting “early_data”, servers MUST process the client’s ClientHello and then immediately send their flight of messages, rather than waiting for the client’s EndOfEarlyData message before sending its ServerHello.

4.3. Server Parameters

The next two messages from the server, EncryptedExtensions and CertificateRequest, contain information from the server that determines the rest of the handshake. These messages are encrypted with keys derived from the server_handshake_traffic_secret.

4.3.1. Encrypted Extensions

In all handshakes, the server MUST send the EncryptedExtensions message immediately after the ServerHello message. This is the first message that is encrypted under keys derived from the server_handshake_traffic_secret.

The EncryptedExtensions message contains extensions that can be protected, i.e., any which are not needed to establish the cryptographic context, but which are not associated with individual certificates. The client MUST check EncryptedExtensions for the presence of any forbidden extensions and if any are found MUST abort the handshake with an “illegal_parameter” alert.

Structure of this message:

   struct {
       Extension extensions<0..2^16-1>;
   } EncryptedExtensions;
extensions
A list of extensions. For more information, see the table in Section 4.2.

4.3.2. Certificate Request

A server which is authenticating with a certificate MAY optionally request a certificate from the client. This message, if sent, MUST follow EncryptedExtensions.

Structure of this message:

   struct {
       opaque certificate_request_context<0..2^8-1>;
       Extension extensions<2..2^16-1>;
   } CertificateRequest;
certificate_request_context
An opaque string which identifies the certificate request and which will be echoed in the client’s Certificate message. The certificate_request_context MUST be unique within the scope of this connection (thus preventing replay of client CertificateVerify messages). This field SHALL be zero length unless used for the post-handshake authentication exchanges described in Section 4.6.2. When requesting post-handshake authentication, the server SHOULD make the context unpredictable to the client (e.g., by randomly generating it) in order to prevent an attacker who has temporary access to the client’s private key from pre-computing valid CertificateVerify messages.
extensions
A set of extensions describing the parameters of the certificate being requested. The “signature_algorithms” extension MUST be specified, and other extensions may optionally be included if defined for this message. Clients MUST ignore unrecognized extensions.

In prior versions of TLS, the CertificateRequest message carried a list of signature algorithms and certificate authorities which the server would accept. In TLS 1.3 the former is expressed by sending the “signature_algorithms” and optionally “signature_algorithms_cert” extensions. The latter is expressed by sending the “certificate_authorities” extension (see Section 4.2.4).

Servers which are authenticating with a PSK MUST NOT send the CertificateRequest message in the main handshake, though they MAY send it in post-handshake authentication (see Section 4.6.2) provided that the client has sent the “post_handshake_auth” extension (see Section 4.2.6).

4.4. Authentication Messages

As discussed in Section 2, TLS generally uses a common set of messages for authentication, key confirmation, and handshake integrity: Certificate, CertificateVerify, and Finished. (The PreSharedKey binders also perform key confirmation, in a similar fashion.) These three messages are always sent as the last messages in their handshake flight. The Certificate and CertificateVerify messages are only sent under certain circumstances, as defined below. The Finished message is always sent as part of the Authentication block. These messages are encrypted under keys derived from [sender]_handshake_traffic_secret.

The computations for the Authentication messages all uniformly take the following inputs:

  • The certificate and signing key to be used.
  • A Handshake Context consisting of the set of messages to be included in the transcript hash.
  • A base key to be used to compute a MAC key.

Based on these inputs, the messages then contain:

Certificate
The certificate to be used for authentication, and any supporting certificates in the chain. Note that certificate-based client authentication is not available in PSK (including 0-RTT) flows.
CertificateVerify
A signature over the value Transcript-Hash(Handshake Context, Certificate)
Finished
A MAC over the value Transcript-Hash(Handshake Context, Certificate, CertificateVerify) using a MAC key derived from the base key.

The following table defines the Handshake Context and MAC Base Key for each scenario:

   +----------------+----------------------------------------+--------------------------------------+
   | Mode           | Handshake Context                      | Base Key                             |
   +----------------+----------------------------------------+--------------------------------------+
   | Server         | ClientHello ... later of               | server_handshake_traffic_secret      |
   |                | EncryptedExtensions/CertificateRequest |                                      |
   | Client         | ClientHello ... later of server        | client_handshake_traffic_secret      |
   |                | Finished/EndOfEarlyData                |                                      |
   | Post-Handshake | ClientHello ... client Finished        | client_application_traffic_secret_N  |
   |                | + CertificateRequest                   |                                      |
   +----------------+----------------------------------------+--------------------------------------+

4.4.1. The Transcript Hash

Many of the cryptographic computations in TLS make use of a transcript hash. This value is computed by hashing the concatenation of each included handshake message, including the handshake message header carrying the handshake message type and length fields, but not including record layer headers. I.e.,

 Transcript-Hash(M1, M2, ... Mn) = Hash(M1 || M2 || ... || Mn)

As an exception to this general rule, when the server responds to a ClientHello with a HelloRetryRequest, the value of ClientHello1 is replaced with a special synthetic handshake message of handshake type “message_hash” containing Hash(ClientHello1). I.e.,

 Transcript-Hash(ClientHello1, HelloRetryRequest, ... Mn) =
     Hash(message_hash ||        /* Handshake type */
          00 00 Hash.length ||   /* Handshake message length (bytes) */
          Hash(ClientHello1) ||  /* Hash of ClientHello1 */
          HelloRetryRequest || ... || Mn)

The reason for this construction is to allow the server to do a stateless HelloRetryRequest by storing just the hash of ClientHello1 in the cookie, rather than requiring it to export the entire intermediate hash state (see Section 4.2.2).
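A non-normative sketch of this rule for the HelloRetryRequest case, assuming SHA-256 (Hash.length = 32) and already-serialized handshake messages (headers included):

   import hashlib

   MESSAGE_HASH = 254  # handshake type "message_hash"

   def transcript_hash_after_hrr(client_hello1, hello_retry_request, *rest):
       # Synthetic message: type, 3-byte length (00 00 Hash.length), hash body
       synthetic = (bytes([MESSAGE_HASH]) + (32).to_bytes(3, "big")
                    + hashlib.sha256(client_hello1).digest())
       h = hashlib.sha256(synthetic + hello_retry_request)
       for msg in rest:
           h.update(msg)
       return h.digest()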

For concreteness, the transcript hash is always taken from the following sequence of handshake messages, starting at the first ClientHello and including only those messages that were sent: ClientHello, HelloRetryRequest, ClientHello, ServerHello, EncryptedExtensions, server CertificateRequest, server Certificate, server CertificateVerify, server Finished, EndOfEarlyData, client Certificate, client CertificateVerify, client Finished.

In general, implementations can implement the transcript by keeping a running transcript hash value based on the negotiated hash. Note, however, that subsequent post-handshake authentications do not include each other, just the messages through the end of the main handshake.

4.4.2. Certificate

This message conveys the endpoint’s certificate chain to the peer.

The server MUST send a Certificate message whenever the agreed-upon key exchange method uses certificates for authentication (this includes all key exchange methods defined in this document except PSK).

The client MUST send a Certificate message if and only if the server has requested client authentication via a CertificateRequest message (Section 4.3.2). If the server requests client authentication but no suitable certificate is available, the client MUST send a Certificate message containing no certificates (i.e., with the “certificate_list” field having length 0). A Finished message MUST be sent regardless of whether the Certificate message is empty.

Structure of this message:

   /* Managed by IANA */
   enum {
       X509(0),
       RawPublicKey(2),
       (255)
   } CertificateType;

   struct {
       select (certificate_type) {
           case RawPublicKey:
             /* From RFC 7250 ASN.1_subjectPublicKeyInfo */
             opaque ASN1_subjectPublicKeyInfo<1..2^24-1>;

           case X509:
             opaque cert_data<1..2^24-1>;
       };
       Extension extensions<0..2^16-1>;
   } CertificateEntry;

   struct {
       opaque certificate_request_context<0..2^8-1>;
       CertificateEntry certificate_list<0..2^24-1>;
   } Certificate;
certificate_request_context
If this message is in response to a CertificateRequest, the value of certificate_request_context in that message. Otherwise (in the case of server authentication), this field SHALL be zero length.
certificate_list
This is a sequence (chain) of CertificateEntry structures, each containing a single certificate and set of extensions.
extensions
A set of extension values for the CertificateEntry. The “Extension” format is defined in Section 4.2. Valid extensions for server certificates at present include OCSP Status extension ([RFC6066]) and SignedCertificateTimestamps ([RFC6962]); future extensions may be defined for this message as well. Extensions in the Certificate message from the server MUST correspond to ones from the ClientHello message. Extensions in the Certificate from the client MUST correspond with extensions in the CertificateRequest message from the server. If an extension applies to the entire chain, it SHOULD be included in the first CertificateEntry.

If the corresponding certificate type extension (“server_certificate_type” or “client_certificate_type”) was not negotiated in Encrypted Extensions, or the X.509 certificate type was negotiated, then each CertificateEntry contains a DER-encoded X.509 certificate. The sender’s certificate MUST come in the first CertificateEntry in the list. Each following certificate SHOULD directly certify the one immediately preceding it. Because certificate validation requires that trust anchors be distributed independently, a certificate that specifies a trust anchor MAY be omitted from the chain, provided that supported peers are known to possess any omitted certificates.

Note: Prior to TLS 1.3, “certificate_list” ordering required each certificate to certify the one immediately preceding it; however, some implementations allowed some flexibility. Servers sometimes send both a current and deprecated intermediate for transitional purposes, and others are simply configured incorrectly, but these cases can nonetheless be validated properly. For maximum compatibility, all implementations SHOULD be prepared to handle potentially extraneous certificates and arbitrary orderings from any TLS version, with the exception of the end-entity certificate which MUST be first.

If the RawPublicKey certificate type was negotiated, then the certificate_list MUST contain no more than one CertificateEntry, which contains an ASN1_subjectPublicKeyInfo value as defined in [RFC7250], Section 3.

The OpenPGP certificate type [RFC6091] MUST NOT be used with TLS 1.3.

The server’s certificate_list MUST always be non-empty. A client will send an empty certificate_list if it does not have an appropriate certificate to send in response to the server’s authentication request.

4.4.2.1. OCSP Status and SCT Extensions

[RFC6066] and [RFC6961] provide extensions to negotiate the server sending OCSP responses to the client. In TLS 1.2 and below, the server replies with an empty extension to indicate negotiation of this extension and the OCSP information is carried in a CertificateStatus message. In TLS 1.3, the server’s OCSP information is carried in an extension in the CertificateEntry containing the associated certificate. Specifically: The body of the “status_request” extension from the server MUST be a CertificateStatus structure as defined in [RFC6066], which is interpreted as defined in [RFC6960].

Note: status_request_v2 extension ([RFC6961]) is deprecated. TLS 1.3 servers MUST NOT act upon its presence or information in it when processing Client Hello, in particular they MUST NOT send the status_request_v2 extension in the Encrypted Extensions, Certificate Request or the Certificate messages. TLS 1.3 servers MUST be able to process Client Hello messages that include it, as it MAY be sent by clients that wish to use it in earlier protocol versions.

A server MAY request that a client present an OCSP response with its certificate by sending an empty “status_request” extension in its CertificateRequest message. If the client opts to send an OCSP response, the body of its “status_request” extension MUST be a CertificateStatus structure as defined in [RFC6066].

Similarly, [RFC6962] provides a mechanism for a server to send a Signed Certificate Timestamp (SCT) as an extension in the ServerHello in TLS 1.2 and below. In TLS 1.3, the server’s SCT information is carried in an extension in CertificateEntry.

4.4.2.2. Server Certificate Selection

The following rules apply to the certificates sent by the server:

  • The certificate type MUST be X.509v3 [RFC5280], unless explicitly negotiated otherwise (e.g., [RFC7250]).
  • The server’s end-entity certificate’s public key (and associated restrictions) MUST be compatible with the selected authentication algorithm from the client’s “signature_algorithms” extension (currently RSA, ECDSA, or EdDSA).
  • The certificate MUST allow the key to be used for signing (i.e., the digitalSignature bit MUST be set if the Key Usage extension is present) with a signature scheme indicated in the client’s “signature_algorithms”/”signature_algorithms_cert” extensions (see Section 4.2.3).
  • The “server_name” [RFC6066] and “certificate_authorities” extensions are used to guide certificate selection. As servers MAY require the presence of the “server_name” extension, clients SHOULD send this extension, when applicable.

All certificates provided by the server MUST be signed by a signature algorithm advertised by the client, if it is able to provide such a chain (see Section 4.2.3). Certificates that are self-signed or certificates that are expected to be trust anchors are not validated as part of the chain and therefore MAY be signed with any algorithm.

If the server cannot produce a certificate chain that is signed only via the indicated supported algorithms, then it SHOULD continue the handshake by sending the client a certificate chain of its choice that may include algorithms that are not known to be supported by the client. This fallback chain SHOULD NOT use the deprecated SHA-1 hash algorithm in general, but MAY do so if the client’s advertisement permits it, and MUST NOT do so otherwise.

If the client cannot construct an acceptable chain using the provided certificates and decides to abort the handshake, then it MUST abort the handshake with an appropriate certificate-related alert (by default, “unsupported_certificate”; see Section 6.2 for more).

If the server has multiple certificates, it chooses one of them based on the above-mentioned criteria (in addition to other criteria, such as transport layer endpoint, local configuration and preferences).

4.4.2.3. Client Certificate Selection

The following rules apply to certificates sent by the client:

  • The certificate type MUST be X.509v3 [RFC5280], unless explicitly negotiated otherwise (e.g., [RFC7250]).
  • If the “certificate_authorities” extension in the CertificateRequest message was present, at least one of the certificates in the certificate chain SHOULD be issued by one of the listed CAs.
  • The certificates MUST be signed using an acceptable signature algorithm, as described in Section 4.3.2. Note that this relaxes the constraints on certificate-signing algorithms found in prior versions of TLS.
  • If the CertificateRequest message contained a non-empty “oid_filters” extension, the end-entity certificate MUST match the extension OIDs that are recognized by the client, as described in Section 4.2.5.

Note that, as with the server certificate, there are certificates that use algorithm combinations that cannot be currently used with TLS.

4.4.2.4. Receiving a Certificate Message

In general, detailed certificate validation procedures are out of scope for TLS (see [RFC5280]). This section provides TLS-specific requirements.

If the server supplies an empty Certificate message, the client MUST abort the handshake with a “decode_error” alert.

If the client does not send any certificates (i.e., it sends an empty Certificate message), the server MAY at its discretion either continue the handshake without client authentication, or abort the handshake with a “certificate_required” alert. Also, if some aspect of the certificate chain was unacceptable (e.g., it was not signed by a known, trusted CA), the server MAY at its discretion either continue the handshake (considering the client unauthenticated) or abort the handshake.

Any endpoint receiving any certificate which it would need to validate using any signature algorithm using an MD5 hash MUST abort the handshake with a “bad_certificate” alert. SHA-1 is deprecated and it is RECOMMENDED that any endpoint receiving any certificate which it would need to validate using any signature algorithm using a SHA-1 hash abort the handshake with a “bad_certificate” alert. For clarity, this means that endpoints MAY accept these algorithms for certificates that are self-signed or are trust anchors.

All endpoints are RECOMMENDED to transition to SHA-256 or better as soon as possible to maintain interoperability with implementations currently in the process of phasing out SHA-1 support.

Note that a certificate containing a key for one signature algorithm MAY be signed using a different signature algorithm (for instance, an RSA key signed with an ECDSA key).

4.4.3. Certificate Verify

This message is used to provide explicit proof that an endpoint possesses the private key corresponding to its certificate. The CertificateVerify message also provides integrity for the handshake up to this point. Servers MUST send this message when authenticating via a certificate. Clients MUST send this message whenever authenticating via a certificate (i.e., when the Certificate message is non-empty). When sent, this message MUST appear immediately after the Certificate message and immediately prior to the Finished message.

Structure of this message:

   struct {
       SignatureScheme algorithm;
       opaque signature<0..2^16-1>;
   } CertificateVerify;

The algorithm field specifies the signature algorithm used (see Section 4.2.3 for the definition of this field). The signature is a digital signature using that algorithm. The content that is covered under the signature is the hash output as described in Section 4.4.1, namely:

   Transcript-Hash(Handshake Context, Certificate)

The digital signature is then computed over the concatenation of:

  • A string that consists of octet 32 (0x20) repeated 64 times
  • The context string
  • A single 0 byte which serves as the separator
  • The content to be signed

This structure is intended to prevent an attack on previous versions of TLS in which the ServerKeyExchange format meant that attackers could obtain a signature of a message with a chosen 32-byte prefix (ClientHello.random). The initial 64-byte pad clears that prefix along with the server-controlled ServerHello.random.

The context string for a server signature is: “TLS 1.3, server CertificateVerify” The context string for a client signature is: “TLS 1.3, client CertificateVerify” It is used to provide separation between signatures made in different contexts, helping against potential cross-protocol attacks.

For example, if the transcript hash was 32 bytes of 01 (this length would make sense for SHA-256), the content covered by the digital signature for a server CertificateVerify would be:

   2020202020202020202020202020202020202020202020202020202020202020
   2020202020202020202020202020202020202020202020202020202020202020
   544c5320312e332c207365727665722043657274696669636174655665726966
   79
   00
   0101010101010101010101010101010101010101010101010101010101010101
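A non-normative sketch that assembles the same to-be-signed content (64 bytes of 0x20, the context string, a zero byte, then the transcript hash):

   def certificate_verify_content(transcript_hash, server=True):
       context = (b"TLS 1.3, server CertificateVerify" if server
                  else b"TLS 1.3, client CertificateVerify")
       return b"\x20" * 64 + context + b"\x00" + transcript_hash

   # Reproduces the example above: certificate_verify_content(b"\x01" * 32)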

On the sender side the process for computing the signature field of the CertificateVerify message takes as input:

  • The content covered by the digital signature
  • The private signing key corresponding to the certificate sent in the previous message

If the CertificateVerify message is sent by a server, the signature algorithm MUST be one offered in the client’s “signature_algorithms” extension unless no valid certificate chain can be produced without unsupported algorithms (see Section 4.2.3).

If sent by a client, the signature algorithm used in the signature MUST be one of those present in the supported_signature_algorithms field of the “signature_algorithms” extension in the CertificateRequest message.

In addition, the signature algorithm MUST be compatible with the key in the sender’s end-entity certificate. RSA signatures MUST use an RSASSA-PSS algorithm, regardless of whether RSASSA-PKCS1-v1_5 algorithms appear in “signature_algorithms”. The SHA-1 algorithm MUST NOT be used in any signatures of CertificateVerify messages. All SHA-1 signature algorithms in this specification are defined solely for use in legacy certificates and are not valid for CertificateVerify signatures.

The receiver of a CertificateVerify message MUST verify the signature field. The verification process takes as input:

  • The content covered by the digital signature
  • The public key contained in the end-entity certificate found in the associated Certificate message.
  • The digital signature received in the signature field of the CertificateVerify message

If the verification fails, the receiver MUST terminate the handshake with a “decrypt_error” alert.

4.4.4. Finished

The Finished message is the final message in the authentication block. It is essential for providing authentication of the handshake and of the computed keys.

Recipients of Finished messages MUST verify that the contents are correct and if incorrect MUST terminate the connection with a “decrypt_error” alert.

Once a side has sent its Finished message and received and validated the Finished message from its peer, it may begin to send and receive application data over the connection. There are two settings in which it is permitted to send data prior to receiving the peer’s Finished:

  1. Clients sending 0-RTT data as described in Section 4.2.10.
  2. Servers MAY send data after sending their first flight, but because the handshake is not yet complete, they have no assurance of either the peer’s identity or of its liveness (i.e., the ClientHello might have been replayed).

The key used to compute the Finished message is computed from the Base key defined in Section 4.4 using HKDF (see Section 7.1). Specifically:

finished_key =
    HKDF-Expand-Label(BaseKey, "finished", "", Hash.length)

Structure of this message:

   struct {
       opaque verify_data[Hash.length];
   } Finished;

The verify_data value is computed as follows:

   verify_data =
       HMAC(finished_key,
            Transcript-Hash(Handshake Context,
                            Certificate*, CertificateVerify*))

   * Only included if present.

HMAC [RFC2104] uses the Hash algorithm for the handshake. As noted above, the HMAC input can generally be implemented by a running hash, i.e., just the handshake hash at this point.
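A non-normative sketch of this computation, assuming SHA-256; the single-block HKDF-Expand shown is sufficient because the output length equals the hash length. handshake_transcript_bytes stands for the concatenated handshake messages covered by the Handshake Context.

   import hashlib, hmac

   def hkdf_expand_label(secret, label, context, length):
       info = (length.to_bytes(2, "big")
               + bytes([6 + len(label)]) + b"tls13 " + label
               + bytes([len(context)]) + context)
       return hmac.new(secret, info + b"\x01", "sha256").digest()[:length]

   def finished_verify_data(base_key, handshake_transcript_bytes):
       finished_key = hkdf_expand_label(base_key, b"finished", b"", 32)
       transcript_hash = hashlib.sha256(handshake_transcript_bytes).digest()
       return hmac.new(finished_key, transcript_hash, "sha256").digest()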

In previous versions of TLS, the verify_data was always 12 octets long. In TLS 1.3, it is the size of the HMAC output for the Hash used for the handshake.

Note: Alerts and any other record types are not handshake messages and are not included in the hash computations.

Any records following a Finished message MUST be encrypted under the appropriate application traffic key as described in Section 7.2. In particular, this includes any alerts sent by the server in response to client Certificate and CertificateVerify messages.

4.5. End of Early Data

   struct {} EndOfEarlyData;

If the server sent an “early_data” extension in EncryptedExtensions, the client MUST send an EndOfEarlyData message after receiving the server Finished. If the server does not send an “early_data” extension, then the client MUST NOT send an EndOfEarlyData message. This message indicates that all 0-RTT application_data messages, if any, have been transmitted and that the following records are protected under handshake traffic keys. Servers MUST NOT send this message and clients receiving it MUST terminate the connection with an “unexpected_message” alert. This message is encrypted under keys derived from the client_early_traffic_secret.

4.6. Post-Handshake Messages

TLS also allows other messages to be sent after the main handshake. These messages use a handshake content type and are encrypted under the appropriate application traffic key.

4.6.1. New Session Ticket Message

At any time after the server has received the client Finished message, it MAY send a NewSessionTicket message. This message creates a unique association between the ticket value and a secret PSK derived from the resumption master secret (see Section 7).

The client MAY use this PSK for future handshakes by including the ticket value in the “pre_shared_key” extension in its ClientHello (Section 4.2.11). Servers MAY send multiple tickets on a single connection, either immediately after each other or after specific events (see Appendix C.4). For instance, the server might send a new ticket after post-handshake authentication in order to encapsulate the additional client authentication state. Multiple tickets are useful for clients for a variety of purposes, including:

  • Opening multiple parallel HTTP connections.
  • Performing connection racing across interfaces and address families via, e.g., Happy Eyeballs [RFC8305] or related techniques.

Any ticket MUST only be resumed with a cipher suite that has the same KDF hash algorithm as that used to establish the original connection.

Clients MUST only resume if the new SNI value is valid for the server certificate presented in the original session, and SHOULD only resume if the SNI value matches the one used in the original session. The latter is a performance optimization: normally, there is no reason to expect that different servers covered by a single certificate would be able to accept each other’s tickets, hence attempting resumption in that case would waste a single-use ticket. If such an indication is provided (externally or by any other means), clients MAY resume with a different SNI value.

On resumption, if reporting an SNI value to the calling application, implementations MUST use the value sent in the resumption ClientHello rather than the value sent in the previous session. Note that if a server implementation declines all PSK identities with different SNI values, these two values are always the same.

Note: Although the resumption master secret depends on the client’s second flight, servers which do not request client authentication MAY compute the remainder of the transcript independently and then send a NewSessionTicket immediately upon sending its Finished rather than waiting for the client Finished. This might be appropriate in cases where the client is expected to open multiple TLS connections in parallel and would benefit from the reduced overhead of a resumption handshake, for example.

   struct {
       uint32 ticket_lifetime;
       uint32 ticket_age_add;
       opaque ticket_nonce<0..255>;
       opaque ticket<1..2^16-1>;
       Extension extensions<0..2^16-2>;
   } NewSessionTicket;
ticket_lifetime
Indicates the lifetime in seconds as a 32-bit unsigned integer in network byte order from the time of ticket issuance. Servers MUST NOT use any value greater than 604800 seconds (7 days). The value of zero indicates that the ticket should be discarded immediately. Clients MUST NOT cache tickets for longer than 7 days, regardless of the ticket_lifetime, and MAY delete tickets earlier based on local policy. A server MAY treat a ticket as valid for a shorter period of time than what is stated in the ticket_lifetime.
ticket_age_add
A securely generated, random 32-bit value that is used to obscure the age of the ticket that the client includes in the “pre_shared_key” extension. The client-side ticket age is added to this value modulo 2^32 to obtain the value that is transmitted by the client. The server MUST generate a fresh value for each ticket it sends.
ticket_nonce
A per-ticket value that is unique across all tickets issued on this connection.
ticket
The value of the ticket to be used as the PSK identity. The ticket itself is an opaque label. It MAY either be a database lookup key or a self-encrypted and self-authenticated value. Section 4 of [RFC5077] describes a recommended ticket construction mechanism.
extensions
A set of extension values for the ticket. The “Extension” format is defined in Section 4.2. Clients MUST ignore unrecognized extensions.

The sole extension currently defined for NewSessionTicket is “early_data”, indicating that the ticket may be used to send 0-RTT data (Section 4.2.10). It contains the following value:

max_early_data_size
The maximum amount of 0-RTT data that the client is allowed to send when using this ticket, in bytes. Only Application Data payload (i.e., plaintext but not padding or the inner content type byte) is counted. A server receiving more than max_early_data_size bytes of 0-RTT data SHOULD terminate the connection with an “unexpected_message” alert. Note that servers that reject early data due to lack of cryptographic material will be unable to differentiate padding from content, so clients SHOULD NOT depend on being able to send large quantities of padding in early data records.

The PSK associated with the ticket is computed as:

    HKDF-Expand-Label(resumption_master_secret,
                     "resumption", ticket_nonce, Hash.length)

Because the ticket_nonce value is distinct for each NewSessionTicket message, a different PSK will be derived for each ticket.
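A non-normative sketch of this derivation, assuming SHA-256; ticket_nonce is the value carried in the corresponding NewSessionTicket, and the single-block HKDF-Expand is sufficient because the output length equals the hash length.

   import hmac

   def hkdf_expand_label(secret, label, context, length):
       info = (length.to_bytes(2, "big")
               + bytes([6 + len(label)]) + b"tls13 " + label
               + bytes([len(context)]) + context)
       return hmac.new(secret, info + b"\x01", "sha256").digest()[:length]

   def resumption_psk(resumption_master_secret, ticket_nonce):
       return hkdf_expand_label(resumption_master_secret, b"resumption",
                                ticket_nonce, 32)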

Note that in principle it is possible to continue issuing new tickets which indefinitely extend the lifetime of the keying material originally derived from an initial non-PSK handshake (which was most likely tied to the peer’s certificate). It is RECOMMENDED that implementations place limits on the total lifetime of such keying material; these limits should take into account the lifetime of the peer’s certificate, the likelihood of intervening revocation, and the time since the peer’s online CertificateVerify signature.

4.6.2. Post-Handshake Authentication

When the client has sent the “post_handshake_auth” extension (see Section 4.2.6), a server MAY request client authentication at any time after the handshake has completed by sending a CertificateRequest message. The client MUST respond with the appropriate Authentication messages (see Section 4.4). If the client chooses to authenticate, it MUST send Certificate, CertificateVerify, and Finished. If it declines, it MUST send a Certificate message containing no certificates followed by Finished. All of the client’s messages for a given response MUST appear consecutively on the wire with no intervening messages of other types.

A client that receives a CertificateRequest message without having sent the “post_handshake_auth” extension MUST send an “unexpected_message” fatal alert.

Note: Because client authentication could involve prompting the user, servers MUST be prepared for some delay, including receiving an arbitrary number of other messages between sending the CertificateRequest and receiving a response. In addition, clients which receive multiple CertificateRequests in close succession MAY respond to them in a different order than they were received (the certificate_request_context value allows the server to disambiguate the responses).

4.6.3. Key and IV Update

   enum {
       update_not_requested(0), update_requested(1), (255)
   } KeyUpdateRequest;

   struct {
       KeyUpdateRequest request_update;
   } KeyUpdate;
request_update
Indicates whether the recipient of the KeyUpdate should respond with its own KeyUpdate. If an implementation receives any other value, it MUST terminate the connection with an “illegal_parameter” alert.

The KeyUpdate handshake message is used to indicate that the sender is updating its sending cryptographic keys. This message can be sent by either peer after it has sent a Finished message. Implementations that receive a KeyUpdate message prior to receiving a Finished message MUST terminate the connection with an “unexpected_message” alert. After sending a KeyUpdate message, the sender SHALL send all its traffic using the next generation of keys, computed as described in Section 7.2. Upon receiving a KeyUpdate, the receiver MUST update its receiving keys.

If the request_update field is set to “update_requested” then the receiver MUST send a KeyUpdate of its own with request_update set to “update_not_requested” prior to sending its next application data record. This mechanism allows either side to force an update to the entire connection, but causes an implementation which receives multiple KeyUpdates while it is silent to respond with a single update. Note that implementations may receive an arbitrary number of messages between sending a KeyUpdate with request_update set to update_requested and receiving the peer’s KeyUpdate, because those messages may already be in flight. However, because send and receive keys are derived from independent traffic secrets, retaining the receive traffic secret does not threaten the forward secrecy of data sent before the sender changed keys.

If implementations independently send their own KeyUpdates with request_update set to “update_requested”, and they cross in flight, then each side will also send a response, with the result that each side increments by two generations.

Both sender and receiver MUST encrypt their KeyUpdate messages with the old keys. Additionally, both sides MUST enforce that a KeyUpdate with the old key is received before accepting any messages encrypted with the new key. Failure to do so may allow message truncation attacks.

5. Record Protocol

The TLS record protocol takes messages to be transmitted, fragments the data into manageable blocks, protects the records, and transmits the result. Received data is verified, decrypted, reassembled, and then delivered to higher-level clients.

TLS records are typed, which allows multiple higher-level protocols to be multiplexed over the same record layer. This document specifies four content types: handshake, application data, alert, and change_cipher_spec. The change_cipher_spec record is used only for compatibility purposes (see Appendix D.4).

An implementation may receive an unencrypted record of type change_cipher_spec consisting of the single byte value 0x01 at any time after the first ClientHello message has been sent or received and before the peer’s Finished message has been received and MUST simply drop it without further processing. Note that this record may appear at a point in the handshake where the implementation is expecting protected records, so it is necessary to detect this condition prior to attempting to deprotect the record. An implementation which receives any other change_cipher_spec value or which receives a protected change_cipher_spec record MUST abort the handshake with an “unexpected_message” alert. A change_cipher_spec record received before the first ClientHello message or after the peer’s Finished message MUST be treated as an unexpected record type (though stateless servers may not be able to distinguish these cases from allowed cases).

Implementations MUST NOT send record types not defined in this document unless negotiated by some extension. If a TLS implementation receives an unexpected record type, it MUST terminate the connection with an “unexpected_message” alert. New record content type values are assigned by IANA in the TLS Content Type Registry as described in Section 11.

5.1. Record Layer

The record layer fragments information blocks into TLSPlaintext records carrying data in chunks of 2^14 bytes or less. Message boundaries are handled differently depending on the underlying ContentType. Any future content types MUST specify appropriate rules. Note that these rules are stricter than what was enforced in TLS 1.2.

Handshake messages MAY be coalesced into a single TLSPlaintext record or fragmented across several records, provided that:

  • Handshake messages MUST NOT be interleaved with other record types. That is, if a handshake message is split over two or more records, there MUST NOT be any other records between them.
  • Handshake messages MUST NOT span key changes. Implementations MUST verify that all messages immediately preceding a key change align with a record boundary; if not, then they MUST terminate the connection with an “unexpected_message” alert. Because the ClientHello, EndOfEarlyData, ServerHello, Finished, and KeyUpdate messages can immediately precede a key change, implementations MUST send these messages in alignment with a record boundary.

Implementations MUST NOT send zero-length fragments of Handshake types, even if those fragments contain padding.

Alert messages (Section 6) MUST NOT be fragmented across records and multiple Alert messages MUST NOT be coalesced into a single TLSPlaintext record. In other words, a record with an Alert type MUST contain exactly one message.

Application Data messages contain data that is opaque to TLS. Application Data messages are always protected. Zero-length fragments of Application Data MAY be sent as they are potentially useful as a traffic analysis countermeasure. Application Data fragments MAY be split across multiple records or coalesced into a single record.

   enum {
       invalid(0),
       change_cipher_spec(20),
       alert(21),
       handshake(22),
       application_data(23),
       (255)
   } ContentType;

   struct {
       ContentType type;
       ProtocolVersion legacy_record_version;
       uint16 length;
       opaque fragment[TLSPlaintext.length];
   } TLSPlaintext;
type
The higher-level protocol used to process the enclosed fragment.
legacy_record_version
This value MUST be set to 0x0303 for all records generated by a TLS 1.3 implementation other than an initial ClientHello (i.e., one not generated after a HelloRetryRequest), where it MAY also be 0x0301 for compatibility purposes. This field is deprecated and MUST be ignored for all purposes. Previous versions of TLS would use other values in this field under some circumstances.
length
The length (in bytes) of the following TLSPlaintext.fragment. The length MUST NOT exceed 2^14 bytes. An endpoint that receives a record that exceeds this length MUST terminate the connection with a “record_overflow” alert.
fragment
The data being transmitted. This value is transparent and is treated as an independent block to be dealt with by the higher-level protocol specified by the type field.

This document describes TLS 1.3, which uses the version 0x0304. This version value is historical, deriving from the use of 0x0301 for TLS 1.0 and 0x0300 for SSL 3.0. In order to maximize backwards compatibility, records containing an initial ClientHello SHOULD have version 0x0301 and a record containing a second ClientHello or a ServerHello MUST have version 0x0303, reflecting TLS 1.0 and TLS 1.2 respectively. When negotiating prior versions of TLS, endpoints follow the procedure and requirements in Appendix D.

When record protection has not yet been engaged, TLSPlaintext structures are written directly onto the wire. Once record protection has started, TLSPlaintext records are protected and sent as described in the following section. Note that application data records MUST NOT be written to the wire unprotected (see Section 2 for details).

5.2. Record Payload Protection

The record protection functions translate a TLSPlaintext structure into a TLSCiphertext. The deprotection functions reverse the process. In TLS 1.3, as opposed to previous versions of TLS, all ciphers are modeled as “Authenticated Encryption with Additional Data” (AEAD) [RFC5116]. AEAD functions provide a unified encryption and authentication operation which turns plaintext into authenticated ciphertext and back again. Each encrypted record consists of a plaintext header followed by an encrypted body, which itself contains a type and optional padding.

   struct {
       opaque content[TLSPlaintext.length];
       ContentType type;
       uint8 zeros[length_of_padding];
   } TLSInnerPlaintext;

   struct {
       ContentType opaque_type = application_data; /* 23 */
       ProtocolVersion legacy_record_version = 0x0303; /* TLS v1.2 */
       uint16 length;
       opaque encrypted_record[TLSCiphertext.length];
   } TLSCiphertext;
content
The TLSPlaintext.fragment value, containing the byte encoding of a handshake or an alert message, or the raw bytes of the application’s data to send.
type
The TLSPlaintext.type value containing the content type of the record.
zeros
An arbitrary-length run of zero-valued bytes may appear in the cleartext after the type field. This provides an opportunity for senders to pad any TLS record by a chosen amount as long as the total stays within record size limits. See Section 5.4 for more details.
opaque_type
The outer opaque_type field of a TLSCiphertext record is always set to the value 23 (application_data) for outward compatibility with middleboxes accustomed to parsing previous versions of TLS. The actual content type of the record is found in TLSInnerPlaintext.type after decryption.
legacy_record_version
The legacy_record_version field is always 0x0303. TLS 1.3 TLSCiphertexts are not generated until after TLS 1.3 has been negotiated, so there are no historical compatibility concerns where other values might be received. Note that the handshake protocol including the ClientHello and ServerHello messages authenticates the protocol version, so this value is redundant.
length
The length (in bytes) of the following TLSCiphertext.encrypted_record, which is the sum of the lengths of the content and the padding, plus one for the inner content type, plus any expansion added by the AEAD algorithm. The length MUST NOT exceed 2^14 + 256 bytes. An endpoint that receives a record that exceeds this length MUST terminate the connection with a “record_overflow” alert.
encrypted_record
The AEAD-encrypted form of the serialized TLSInnerPlaintext structure.

AEAD algorithms take as input a single key, a nonce, a plaintext, and “additional data” to be included in the authentication check, as described in Section 2.1 of [RFC5116]. The key is either the client_write_key or the server_write_key, the nonce is derived from the sequence number and the client_write_iv or server_write_iv (see Section 5.3), and the additional data input is the record header. I.e.,

   additional_data = TLSCiphertext.opaque_type ||
                     TLSCiphertext.legacy_record_version ||
                     TLSCiphertext.length

The plaintext input to the AEAD algorithm is the encoded TLSInnerPlaintext structure. Derivation of traffic keys is defined in Section 7.3.

The AEAD output consists of the ciphertext output from the AEAD encryption operation. The length of the plaintext is greater than the corresponding TLSPlaintext.length due to the inclusion of TLSInnerPlaintext.type and any padding supplied by the sender. The length of the AEAD output will generally be larger than the plaintext, but by an amount that varies with the AEAD algorithm. Since the ciphers might incorporate padding, the amount of overhead could vary with different lengths of plaintext. Symbolically,

   AEADEncrypted =
       AEAD-Encrypt(write_key, nonce, additional_data, plaintext)

Then the encrypted_record field of TLSCiphertext is set to AEADEncrypted.
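A non-normative sketch of protecting one record with AES-128-GCM via the Python “cryptography” package; the nonce is assumed to have been formed per Section 5.3, write_key comes from the key schedule (Section 7.3), and content_type is the inner content type (e.g., 22 for handshake, 23 for application_data).

   from cryptography.hazmat.primitives.ciphers.aead import AESGCM

   def protect_record(write_key, nonce, content, content_type, padding_len=0):
       """Encrypt one TLSInnerPlaintext and return the full TLSCiphertext."""
       inner = content + bytes([content_type]) + b"\x00" * padding_len
       length = len(inner) + 16                       # AEAD expansion (GCM tag)
       additional_data = (bytes([23])                 # opaque_type
                          + b"\x03\x03"               # legacy_record_version
                          + length.to_bytes(2, "big"))
       encrypted_record = AESGCM(write_key).encrypt(nonce, inner,
                                                    additional_data)
       return additional_data + encrypted_record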

In order to decrypt and verify, the cipher takes as input the key, nonce, additional data, and the AEADEncrypted value. The output is either the plaintext or an error indicating that the decryption failed. There is no separate integrity check. That is:

   plaintext of encrypted_record =
       AEAD-Decrypt(peer_write_key, nonce, additional_data, AEADEncrypted)

If the decryption fails, the receiver MUST terminate the connection with a “bad_record_mac” alert.

An AEAD algorithm used in TLS 1.3 MUST NOT produce an expansion greater than 255 octets. An endpoint that receives a record from its peer with TLSCiphertext.length larger than 2^14 + 256 octets MUST terminate the connection with a “record_overflow” alert. This limit is derived from the maximum TLSInnerPlaintext length of 2^14 octets + 1 octet for ContentType + the maximum AEAD expansion of 255 octets.

5.3. Per-Record Nonce

A 64-bit sequence number is maintained separately for reading and writing records. The appropriate sequence number is incremented by one after reading or writing each record. Each sequence number is set to zero at the beginning of a connection and whenever the key is changed; the first record transmitted under a particular traffic key MUST use sequence number 0.

Because the size of sequence numbers is 64-bit, they should not wrap. If a TLS implementation would need to wrap a sequence number, it MUST either re-key (Section 4.6.3) or terminate the connection.

Each AEAD algorithm will specify a range of possible lengths for the per-record nonce, from N_MIN bytes to N_MAX bytes of input ([RFC5116]). The length of the TLS per-record nonce (iv_length) is set to the larger of 8 bytes and N_MIN for the AEAD algorithm (see [RFC5116] Section 4). An AEAD algorithm where N_MAX is less than 8 bytes MUST NOT be used with TLS. The per-record nonce for the AEAD construction is formed as follows:

  1. The 64-bit record sequence number is encoded in network byte order and padded to the left with zeros to iv_length.
  2. The padded sequence number is XORed with the static client_write_iv or server_write_iv, depending on the role.

The resulting quantity (of length iv_length) is used as the per-record nonce.

Note: This is a different construction from that in TLS 1.2, which specified a partially explicit nonce.
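A minimal Python sketch of this nonce construction (the function name is illustrative):

    def per_record_nonce(seq_num, write_iv):
        # write_iv is client_write_iv or server_write_iv; its length is
        # iv_length (e.g., 12 bytes for the AES-GCM and ChaCha20-Poly1305
        # cipher suites defined in this document).
        padded_seq = seq_num.to_bytes(len(write_iv), "big")  # left-padded with zeros
        return bytes(a ^ b for a, b in zip(padded_seq, write_iv))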

All encrypted TLS records can be padded to inflate the size of the TLSCiphertext. This allows the sender to hide the size of the traffic from an observer.

When generating a TLSCiphertext record, implementations MAY choose to pad. An unpadded record is just a record with a padding length of zero. Padding is a string of zero-valued bytes appended to the ContentType field before encryption. Implementations MUST set the padding octets to all zeros before encrypting.

Application Data records may contain a zero-length TLSInnerPlaintext.content if the sender desires. This permits generation of plausibly-sized cover traffic in contexts where the presence or absence of activity may be sensitive. Implementations MUST NOT send Handshake or Alert records that have a zero-length TLSInnerPlaintext.content; if such a message is received, the receiving implementation MUST terminate the connection with an “unexpected_message” alert.

The padding sent is automatically verified by the record protection mechanism; upon successful decryption of a TLSCiphertext.encrypted_record, the receiving implementation scans the field from the end toward the beginning until it finds a non-zero octet. This non-zero octet is the content type of the message. This padding scheme was selected because it allows padding of any encrypted TLS record by an arbitrary size (from zero up to TLS record size limits) without introducing new content types. The design also enforces all-zero padding octets, which allows for quick detection of padding errors.

Implementations MUST limit their scanning to the cleartext returned from the AEAD decryption. If a receiving implementation does not find a non-zero octet in the cleartext, it MUST terminate the connection with an “unexpected_message” alert.
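The following Python sketch (illustrative only) builds a padded TLSInnerPlaintext before encryption and recovers the content and content type from the decrypted cleartext by scanning from the end, as described above:

    def encode_inner_plaintext(content, content_type, padding_len=0):
        # content || ContentType || zeros[padding_len]
        return content + bytes([content_type]) + b"\x00" * padding_len

    def parse_inner_plaintext(cleartext):
        # Scan from the end toward the beginning for the first non-zero octet.
        i = len(cleartext) - 1
        while i >= 0 and cleartext[i] == 0:
            i -= 1
        if i < 0:
            # No content type found: send an "unexpected_message" alert.
            raise ValueError("unexpected_message")
        return cleartext[:i], cleartext[i]     # (content, ContentType)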

The presence of padding does not change the overall record size limitations - the full encoded TLSInnerPlaintext MUST NOT exceed 2^14 + 1 octets. If the maximum fragment length is reduced, as for example by the max_fragment_length extension from [RFC6066], then the reduced limit applies to the full plaintext, including the content type and padding.

Selecting a padding policy that suggests when and how much to pad is a complex topic and is beyond the scope of this specification. If the application layer protocol on top of TLS has its own padding, it may be preferable to pad application_data TLS records within the application layer. Padding for encrypted handshake and alert TLS records must still be handled at the TLS layer, though. Later documents may define padding selection algorithms or define a padding policy request mechanism through TLS extensions or some other means.

There are cryptographic limits on the amount of plaintext which can be safely encrypted under a given set of keys. [AEAD-LIMITS] provides an analysis of these limits under the assumption that the underlying primitive (AES or ChaCha20) has no weaknesses. Implementations SHOULD do a key update as described in Section 4.6.3 prior to reaching these limits.

For AES-GCM, up to 2^24.5 full-size records (about 24 million) may be encrypted on a given connection while keeping a safety margin of approximately 2^-57 for Authenticated Encryption (AE) security. For ChaCha20/Poly1305, the record sequence number would wrap before the safety limit is reached.

One of the content types supported by the TLS record layer is the alert type. Like other messages, alert messages are encrypted as specified by the current connection state.

Alert messages convey a description of the alert and a legacy field that conveyed the severity of the message in previous versions of TLS. Alerts are divided into two classes: closure alerts and error alerts. In TLS 1.3, the severity is implicit in the type of alert being sent, and the ‘level’ field can safely be ignored. The “close_notify” alert is used to indicate orderly closure of one direction of the connection. Upon receiving such an alert, the TLS implementation SHOULD indicate end-of-data to the application.

Error alerts indicate abortive closure of the connection (see Section 6.2). Upon receiving an error alert, the TLS implementation SHOULD indicate an error to the application and MUST NOT allow any further data to be sent or received on the connection. Servers and clients MUST forget the secret values and keys established in failed connections, with the exception of the PSKs associated with session tickets, which SHOULD be discarded if possible.

All the alerts listed in Section 6.2 MUST be sent with AlertLevel=fatal and MUST be treated as error alerts regardless of the AlertLevel in the message. Unknown alert types MUST be treated as error alerts.

Note: TLS defines two generic alerts (see Section 6) to use upon failure to parse a message. Peers which receive a message which cannot be parsed according to the syntax (e.g., have a length extending beyond the message boundary or contain an out-of-range length) MUST terminate the connection with a “decode_error” alert. Peers which receive a message which is syntactically correct but semantically invalid (e.g., a DHE share of p - 1, or an invalid enum) MUST terminate the connection with an “illegal_parameter” alert.

   enum { warning(1), fatal(2), (255) } AlertLevel;

   enum {
       close_notify(0),
       unexpected_message(10),
       bad_record_mac(20),
       record_overflow(22),
       handshake_failure(40),
       bad_certificate(42),
       unsupported_certificate(43),
       certificate_revoked(44),
       certificate_expired(45),
       certificate_unknown(46),
       illegal_parameter(47),
       unknown_ca(48),
       access_denied(49),
       decode_error(50),
       decrypt_error(51),
       protocol_version(70),
       insufficient_security(71),
       internal_error(80),
       inappropriate_fallback(86),
       user_canceled(90),
       missing_extension(109),
       unsupported_extension(110),
       unrecognized_name(112),
       bad_certificate_status_response(113),
       unknown_psk_identity(115),
       certificate_required(116),
       no_application_protocol(120),
       (255)
   } AlertDescription;

   struct {
       AlertLevel level;
       AlertDescription description;
   } Alert;

The client and the server must share knowledge that the connection is ending in order to avoid a truncation attack.

close_notify
This alert notifies the recipient that the sender will not send any more messages on this connection. Any data received after a closure alert has been received MUST be ignored.
user_canceled
This alert notifies the recipient that the sender is canceling the handshake for some reason unrelated to a protocol failure. If a user cancels an operation after the handshake is complete, just closing the connection by sending a “close_notify” is more appropriate. This alert SHOULD be followed by a “close_notify”. This alert generally has AlertLevel=warning.

Either party MAY initiate a close of its write side of the connection by sending a “close_notify” alert. Any data received after a closure alert has been received MUST be ignored. If a transport-level close is received prior to a “close_notify”, the receiver cannot know that all the data that was sent has been received.

Each party MUST send a “close_notify” alert before closing its write side of the connection, unless it has already sent some error alert. This does not have any effect on its read side of the connection. Note that this is a change from versions of TLS prior to TLS 1.3 in which implementations were required to react to a “close_notify” by discarding pending writes and sending an immediate “close_notify” alert of their own. That previous requirement could cause truncation in the read side. Both parties need not wait to receive a “close_notify” alert before closing their read side of the connection, though doing so would introduce the possibility of truncation.

If the application protocol using TLS provides that any data may be carried over the underlying transport after the TLS connection is closed, the TLS implementation MUST receive a “close_notify” alert before indicating end-of-data to the application-layer. No part of this standard should be taken to dictate the manner in which a usage profile for TLS manages its data transport, including when connections are opened or closed.

Note: It is assumed that closing the write side of a connection reliably delivers pending data before destroying the transport.

Error handling in the TLS Handshake Protocol is very simple. When an error is detected, the detecting party sends a message to its peer. Upon transmission or receipt of a fatal alert message, both parties MUST immediately close the connection.

Whenever an implementation encounters a fatal error condition, it SHOULD send an appropriate fatal alert and MUST close the connection without sending or receiving any additional data. In the rest of this specification, when the phrases “terminate the connection” and “abort the handshake” are used without a specific alert it means that the implementation SHOULD send the alert indicated by the descriptions below. The phrases “terminate the connection with a X alert” and “abort the handshake with a X alert” mean that the implementation MUST send alert X if it sends any alert. All alerts defined in this section below, as well as all unknown alerts, are universally considered fatal as of TLS 1.3 (see Section 6). The implementation SHOULD provide a way to facilitate logging the sending and receiving of alerts.

The following error alerts are defined:

unexpected_message
An inappropriate message (e.g., the wrong handshake message, premature application data, etc.) was received. This alert should never be observed in communication between proper implementations.
bad_record_mac
This alert is returned if a record is received which cannot be deprotected. Because AEAD algorithms combine decryption and verification, and also to avoid side channel attacks, this alert is used for all deprotection failures. This alert should never be observed in communication between proper implementations, except when messages were corrupted in the network.
record_overflow
A TLSCiphertext record was received that had a length more than 2^14 + 256 bytes, or a record decrypted to a TLSPlaintext record with more than 2^14 bytes (or some other negotiated limit). This alert should never be observed in communication between proper implementations, except when messages were corrupted in the network.
handshake_failure
Receipt of a “handshake_failure” alert message indicates that the sender was unable to negotiate an acceptable set of security parameters given the options available.
bad_certificate
A certificate was corrupt, contained signatures that did not verify correctly, etc.
unsupported_certificate
A certificate was of an unsupported type.
certificate_revoked
A certificate was revoked by its signer.
certificate_expired
A certificate has expired or is not currently valid.
certificate_unknown
Some other (unspecified) issue arose in processing the certificate, rendering it unacceptable.
illegal_parameter
A field in the handshake was incorrect or inconsistent with other fields. This alert is used for errors which conform to the formal protocol syntax but are otherwise incorrect.
unknown_ca
A valid certificate chain or partial chain was received, but the certificate was not accepted because the CA certificate could not be located or could not be matched with a known trust anchor.
access_denied
A valid certificate or PSK was received, but when access control was applied, the sender decided not to proceed with negotiation.
decode_error
A message could not be decoded because some field was out of the specified range or the length of the message was incorrect. This alert is used for errors where the message does not conform to the formal protocol syntax. This alert should never be observed in communication between proper implementations, except when messages were corrupted in the network.
decrypt_error
A handshake (not record-layer) cryptographic operation failed, including being unable to correctly verify a signature or validate a Finished message or a PSK binder.
protocol_version
The protocol version the peer has attempted to negotiate is recognized but not supported. (see Appendix D)
insufficient_security
Returned instead of “handshake_failure” when a negotiation has failed specifically because the server requires parameters more secure than those supported by the client.
internal_error
An internal error unrelated to the peer or the correctness of the protocol (such as a memory allocation failure) makes it impossible to continue.
inappropriate_fallback
Sent by a server in response to an invalid connection retry attempt from a client (see [RFC7507]).
missing_extension
Sent by endpoints that receive a handshake message not containing an extension that is mandatory to send for the offered TLS version or other negotiated parameters.
unsupported_extension
Sent by endpoints receiving any handshake message containing an extension known to be prohibited for inclusion in the given handshake message, or including any extensions in a ServerHello or Certificate not first offered in the corresponding ClientHello.
unrecognized_name
Sent by servers when no server exists identified by the name provided by the client via the “server_name” extension (see [RFC6066]).
bad_certificate_status_response
Sent by clients when an invalid or unacceptable OCSP response is provided by the server via the “status_request” extension (see [RFC6066]).
unknown_psk_identity
Sent by servers when PSK key establishment is desired but no acceptable PSK identity is provided by the client. Sending this alert is OPTIONAL; servers MAY instead choose to send a “decrypt_error” alert to merely indicate an invalid PSK identity.
certificate_required
Sent by servers when a client certificate is desired but none was provided by the client.
no_application_protocol
Sent by servers when a client “application_layer_protocol_negotiation” extension advertises only protocols that the server does not support (see [RFC7301]).

New Alert values are assigned by IANA as described in Section 11.

The TLS handshake establishes one or more input secrets which are combined to create the actual working keying material, as detailed below. The key derivation process incorporates both the input secrets and the handshake transcript. Note that because the handshake transcript includes the random values from the Hello messages, any given handshake will have different traffic secrets, even if the same input secrets are used, as is the case when the same PSK is used for multiple connections.

The key derivation process makes use of the HKDF-Extract and HKDF-Expand functions as defined for HKDF [RFC5869], as well as the functions defined below:

    HKDF-Expand-Label(Secret, Label, Context, Length) =
         HKDF-Expand(Secret, HkdfLabel, Length)

    Where HkdfLabel is specified as:

    struct {
        uint16 length = Length;
        opaque label<7..255> = "tls13 " + Label;
        opaque context<0..255> = Context;
    } HkdfLabel;

    Derive-Secret(Secret, Label, Messages) =
         HKDF-Expand-Label(Secret, Label,
                           Transcript-Hash(Messages), Hash.length)

The Hash function used by Transcript-Hash and HKDF is the cipher suite hash algorithm. Hash.length is its output length in bytes. Messages is the concatenation of the indicated handshake messages, including the handshake message type and length fields, but not including record layer headers. Note that in some cases a zero-length Context (indicated by “”) is passed to HKDF-Expand-Label. The Labels specified in this document are all ASCII strings, and do not include a trailing NUL byte.

Note: with common hash functions, any label longer than 12 characters requires an additional iteration of the hash function to compute. The labels in this specification have all been chosen to fit within this limit.
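For concreteness, the following Python sketch (using only the standard library and assuming a SHA-256 cipher suite) implements HKDF-Extract, HKDF-Expand, HKDF-Expand-Label, and Derive-Secret as defined above; later sketches in this document reuse these helpers.

    import hashlib, hmac, struct

    HASH, HASH_LEN = hashlib.sha256, 32     # SHA-256 cipher suites

    def hkdf_extract(salt, ikm):
        return hmac.new(salt, ikm, HASH).digest()

    def hkdf_expand(prk, info, length):
        okm, block, counter = b"", b"", 1
        while len(okm) < length:
            block = hmac.new(prk, block + info + bytes([counter]), HASH).digest()
            okm += block
            counter += 1
        return okm[:length]

    def hkdf_expand_label(secret, label, context, length):
        label_bytes = b"tls13 " + label
        hkdf_label = (struct.pack("!H", length) +
                      bytes([len(label_bytes)]) + label_bytes +
                      bytes([len(context)]) + context)
        return hkdf_expand(secret, hkdf_label, length)

    def derive_secret(secret, label, messages):
        # Transcript-Hash(messages) is the hash of the concatenated messages.
        return hkdf_expand_label(secret, label,
                                 HASH(messages).digest(), HASH_LEN)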

Keys are derived from two input secrets using the HKDF-Extract and Derive-Secret functions. The general pattern for adding a new secret is to use HKDF-Extract with the salt being the current secret state and the IKM being the new secret to be added. In this version of TLS 1.3, the two input secrets are:

  • PSK (a pre-shared key established externally or derived from the resumption_master_secret value from a previous connection)
  • (EC)DHE shared secret (Section 7.4)

This produces a full key derivation schedule shown in the diagram below. In this diagram, the following formatting conventions apply:

  • HKDF-Extract is drawn as taking the Salt argument from the top and the IKM argument from the left, with its output to the bottom and the name of the output on the right.
  • Derive-Secret’s Secret argument is indicated by the incoming arrow. For instance, the Early Secret is the Secret for generating the client_early_traffic_secret.
  • “0” indicates a string of Hash.length bytes set to 0.
                 0
                 |
                 v
   PSK ->  HKDF-Extract = Early Secret
                 |
                 +-----> Derive-Secret(.,
                 |                     "ext binder" |
                 |                     "res binder",
                 |                     "")
                 |                     = binder_key
                 |
                 +-----> Derive-Secret(., "c e traffic",
                 |                     ClientHello)
                 |                     = client_early_traffic_secret
                 |
                 +-----> Derive-Secret(., "e exp master",
                 |                     ClientHello)
                 |                     = early_exporter_master_secret
                 v
           Derive-Secret(., "derived", "")
                 |
                 v
(EC)DHE -> HKDF-Extract = Handshake Secret
                 |
                 +-----> Derive-Secret(., "c hs traffic",
                 |                     ClientHello...ServerHello)
                 |                     = client_handshake_traffic_secret
                 |
                 +-----> Derive-Secret(., "s hs traffic",
                 |                     ClientHello...ServerHello)
                 |                     = server_handshake_traffic_secret
                 v
           Derive-Secret(., "derived", "")
                 |
                 v
      0 -> HKDF-Extract = Master Secret
                 |
                 +-----> Derive-Secret(., "c ap traffic",
                 |                     ClientHello...server Finished)
                 |                     = client_application_traffic_secret_0
                 |
                 +-----> Derive-Secret(., "s ap traffic",
                 |                     ClientHello...server Finished)
                 |                     = server_application_traffic_secret_0
                 |
                 +-----> Derive-Secret(., "exp master",
                 |                     ClientHello...server Finished)
                 |                     = exporter_master_secret
                 |
                 +-----> Derive-Secret(., "res master",
                                       ClientHello...client Finished)
                                       = resumption_master_secret

The general pattern here is that the secrets shown down the left side of the diagram are just raw entropy without context, whereas the secrets down the right side include handshake context and therefore can be used to derive working keys without additional context. Note that the different calls to Derive-Secret may take different Messages arguments, even with the same secret. In a 0-RTT exchange, Derive-Secret is called with four distinct transcripts; in a 1-RTT-only exchange with three distinct transcripts.

If a given secret is not available, then the 0-value consisting of a string of Hash.length bytes set to zeros is used. Note that this does not mean skipping rounds, so if PSK is not in use Early Secret will still be HKDF-Extract(0, 0). For the computation of the binder_secret, the label is “ext binder” for external PSKs (those provisioned outside of TLS) and “res binder” for resumption PSKs (those provisioned as the resumption master secret of a previous handshake). The different labels prevent the substitution of one type of PSK for the other.

There are multiple potential Early Secret values depending on which PSK the server ultimately selects. The client will need to compute one for each potential PSK; if no PSK is selected, it will then need to compute the early secret corresponding to the zero PSK.

Once all the values which are to be derived from a given secret have been computed, that secret SHOULD be erased.
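As an illustration, the sketch below walks the schedule above for a SHA-256 cipher suite, reusing the hkdf_extract, derive_secret, and hkdf_expand_label helpers sketched earlier. The transcript arguments are the raw concatenations of the indicated handshake messages; the binder and early-exporter derivations are omitted for brevity.

    ZERO = b"\x00" * HASH_LEN

    def key_schedule(psk, ecdhe, ch, ch_to_sh, ch_to_server_fin, ch_to_client_fin):
        early_secret = hkdf_extract(ZERO, psk or ZERO)
        c_early = derive_secret(early_secret, b"c e traffic", ch)

        handshake_secret = hkdf_extract(
            derive_secret(early_secret, b"derived", b""), ecdhe or ZERO)
        c_hs = derive_secret(handshake_secret, b"c hs traffic", ch_to_sh)
        s_hs = derive_secret(handshake_secret, b"s hs traffic", ch_to_sh)

        master_secret = hkdf_extract(
            derive_secret(handshake_secret, b"derived", b""), ZERO)
        return {
            "client_early_traffic_secret": c_early,
            "client_handshake_traffic_secret": c_hs,
            "server_handshake_traffic_secret": s_hs,
            "client_application_traffic_secret_0":
                derive_secret(master_secret, b"c ap traffic", ch_to_server_fin),
            "server_application_traffic_secret_0":
                derive_secret(master_secret, b"s ap traffic", ch_to_server_fin),
            "exporter_master_secret":
                derive_secret(master_secret, b"exp master", ch_to_server_fin),
            "resumption_master_secret":
                derive_secret(master_secret, b"res master", ch_to_client_fin),
        }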

Once the handshake is complete, it is possible for either side to update its sending traffic keys using the KeyUpdate handshake message defined in Section 4.6.3. The next generation of traffic keys is computed by generating client_/server_application_traffic_secret_N+1 from client_/server_application_traffic_secret_N as described in this section then re-deriving the traffic keys as described in Section 7.3.

The next-generation application_traffic_secret is computed as:

    application_traffic_secret_N+1 =
        HKDF-Expand-Label(application_traffic_secret_N,
                          "traffic upd", "", Hash.length)

Once client_/server_application_traffic_secret_N+1 and its associated traffic keys have been computed, implementations SHOULD delete client_/server_application_traffic_secret_N and its associated traffic keys.
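A minimal sketch of this update, reusing the hkdf_expand_label helper and HASH_LEN constant from the earlier sketch:

    def next_application_traffic_secret(application_traffic_secret_n):
        return hkdf_expand_label(application_traffic_secret_n,
                                 b"traffic upd", b"", HASH_LEN)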

The traffic keying material is generated from the following input values:

  • A secret value
  • A purpose value indicating the specific value being generated
  • The length of the key being generated

The traffic keying material is generated from an input traffic secret value using:

    [sender]_write_key = HKDF-Expand-Label(Secret, "key", "", key_length)
    [sender]_write_iv  = HKDF-Expand-Label(Secret, "iv" , "", iv_length)

[sender] denotes the sending side. The Secret value for each record type is shown in the table below.

   +-------------------+----------------------------------------+
   | Record Type       | Secret                                 |
   +-------------------+----------------------------------------+
   | 0-RTT Application | client_early_traffic_secret            |
   | Handshake         | [sender]_handshake_traffic_secret      |
   | Application Data  | [sender]_application_traffic_secret_N  |
   +-------------------+----------------------------------------+

All the traffic keying material is recomputed whenever the underlying Secret changes (e.g., when changing from the handshake to application data keys or upon a key update).
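As an illustration, the sketch below derives the write key and IV for an AES-128-GCM cipher suite (16-byte key, 12-byte IV), again reusing the hkdf_expand_label helper from the earlier sketch:

    def traffic_keys(secret, key_length=16, iv_length=12):
        write_key = hkdf_expand_label(secret, b"key", b"", key_length)
        write_iv = hkdf_expand_label(secret, b"iv", b"", iv_length)
        return write_key, write_iv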

7.4.1. Finite Field Diffie-Hellman

For finite field groups, a conventional Diffie-Hellman [DH76] computation is performed. The negotiated key (Z) is converted to a byte string by encoding in big-endian and left padded with zeros up to the size of the prime. This byte string is used as the shared secret in the key schedule as specified above.

Note that this construction differs from previous versions of TLS which remove leading zeros.
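A minimal sketch of this encoding, where Z and p are Python integers and the function name is illustrative:

    def encode_ffdhe_shared_secret(Z, p):
        # Big-endian, left-padded with zeros to the byte length of the prime;
        # leading zeros are retained, unlike in previous versions of TLS.
        return Z.to_bytes((p.bit_length() + 7) // 8, "big")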

7.4.2. Elliptic Curve Diffie-Hellman

For secp256r1, secp384r1 and secp521r1, ECDH calculations (including parameter and key generation as well as the shared secret calculation) are performed according to [IEEE1363] using the ECKAS-DH1 scheme with the identity map as key derivation function (KDF), so that the shared secret is the x-coordinate of the ECDH shared secret elliptic curve point represented as an octet string. Note that this octet string (Z in IEEE 1363 terminology) as output by FE2OSP, the Field Element to Octet String Conversion Primitive, has constant length for any given field; leading zeros found in this octet string MUST NOT be truncated.

(Note that this use of the identity KDF is a technicality. The complete picture is that ECDH is employed with a non-trivial KDF because TLS does not directly use this secret for anything other than for computing other secrets.)

ECDH functions are used as follows:

  • The public key to put into the KeyShareEntry.key_exchange structure is the result of applying the ECDH scalar multiplication function to the secret key of appropriate length (into scalar input) and the standard public basepoint (into u-coordinate point input).
  • The ECDH shared secret is the result of applying the ECDH scalar multiplication function to the secret key (into scalar input) and the peer’s public key (into u-coordinate point input). The output is used raw, with no processing.

For X25519 and X448, implementations SHOULD use the approach specified in [RFC7748] to calculate the Diffie-Hellman shared secret. Implementations MUST check whether the computed Diffie-Hellman shared secret is the all-zero value and abort if so, as described in Section 6 of [RFC7748]. If implementors use an alternative implementation of these elliptic curves, they SHOULD perform the additional checks specified in Section 7 of [RFC7748].
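As an illustration, the sketch below generates an X25519 key share and computes the shared secret with the third-party "cryptography" package, including the all-zero check required above (the library may already enforce an equivalent check internally):

    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey, X25519PublicKey)
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    private_key = X25519PrivateKey.generate()
    key_exchange = private_key.public_key().public_bytes(
        Encoding.Raw, PublicFormat.Raw)   # 32 bytes for KeyShareEntry.key_exchange

    def x25519_shared_secret(private_key, peer_key_exchange):
        peer = X25519PublicKey.from_public_bytes(peer_key_exchange)
        shared = private_key.exchange(peer)
        if shared == b"\x00" * 32:
            raise ValueError("all-zero X25519 shared secret: abort the handshake")
        return shared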

[RFC5705] defines keying material exporters for TLS in terms of the TLS pseudorandom function (PRF). This document replaces the PRF with HKDF, thus requiring a new construction. The exporter interface remains the same.

The exporter value is computed as:

TLS-Exporter(label, context_value, key_length) =
    HKDF-Expand-Label(Derive-Secret(Secret, label, ""),
                      "exporter", Hash(context_value), key_length)

Where Secret is either the early_exporter_master_secret or the exporter_master_secret. Implementations MUST use the exporter_master_secret unless explicitly specified by the application. The early_exporter_master_secret is defined for use in settings where an exporter is needed for 0-RTT data. A separate interface for the early exporter is RECOMMENDED; this avoids the exporter user accidentally using an early exporter when a regular one is desired or vice versa.

If no context is provided, the context_value is zero-length. Consequently, providing no context computes the same value as providing an empty context. This is a change from previous versions of TLS where an empty context produced a different output to an absent context. As of this document’s publication, no allocated exporter label is used both with and without a context. Future specifications MUST NOT define a use of exporters that permit both an empty context and no context with the same label. New uses of exporters SHOULD provide a context in all exporter computations, though the value could be empty.

Requirements for the format of exporter labels are defined in section 4 of [RFC5705].
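A sketch of the exporter computation above, reusing the derive_secret and hkdf_expand_label helpers from the earlier sketch (the label and context arguments are byte strings):

    def tls_exporter(secret, label, context_value, key_length):
        # secret is exporter_master_secret or early_exporter_master_secret.
        derived = derive_secret(secret, label, b"")
        return hkdf_expand_label(derived, b"exporter",
                                 HASH(context_value).digest(), key_length)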

As noted in Section 2.3 and Appendix E.5, TLS does not provide inherent replay protections for 0-RTT data. There are two potential threats to be concerned with:

  • Network attackers who mount a replay attack by simply duplicating a flight of 0-RTT data.
  • Network attackers who take advantage of client retry behavior to arrange for the server to receive multiple copies of an application message. This threat already exists to some extent because clients that value robustness respond to network errors by attempting to retry requests. However, 0-RTT adds an additional dimension for any server system which does not maintain globally consistent server state. Specifically, if a server system has multiple zones where tickets from zone A will not be accepted in zone B, then an attacker can duplicate a ClientHello and early data intended for A to both A and B. At A, the data will be accepted in 0-RTT, but at B the server will reject 0-RTT data and instead force a full handshake. If the attacker blocks the ServerHello from A, then the client will complete the handshake with B and probably retry the request, leading to duplication on the server system as a whole.

The first class of attack can be prevented by sharing state to guarantee that the 0-RTT data is accepted at most once. Servers SHOULD provide that level of replay safety, by implementing one of the methods described in this section or by equivalent means. It is understood, however, that due to operational concerns not all deployments will maintain state at that level. Therefore, in normal operation, clients will not know which, if any, of these mechanisms servers actually implement and hence MUST only send early data which they deem safe to be replayed.

In addition to the direct effects of replays, there is a class of attacks where even operations normally considered idempotent could be exploited by a large number of replays (timing attacks, resource limit exhaustion and others described in Appendix E.5). Those can be mitigated by ensuring that every 0-RTT payload can be replayed only a limited number of times. The server MUST ensure that any instance of it (be it a machine, a thread or any other entity within the relevant serving infrastructure) would accept 0-RTT for the same 0-RTT handshake at most once; this limits the number of replays to the number of server instances in the deployment. Such a guarantee can be accomplished by locally recording data from recently-received ClientHellos and rejecting repeats, or by any other method that provides the same or a stronger guarantee. The “at most once per server instance” guarantee is a minimum requirement; servers SHOULD limit 0-RTT replays further when feasible.

The second class of attack cannot be prevented at the TLS layer and MUST be dealt with by any application. Note that any application whose clients implement any kind of retry behavior already needs to implement some sort of anti-replay defense.

The simplest form of anti-replay defense is for the server to only allow each session ticket to be used once. For instance, the server can maintain a database of all outstanding valid tickets; deleting each ticket from the database as it is used. If an unknown ticket is provided, the server would then fall back to a full handshake.

If the tickets are not self-contained but rather are database keys, and the corresponding PSKs are deleted upon use, then connections established using PSKs enjoy forward secrecy. This improves security for all 0-RTT data and PSK usage when PSK is used without (EC)DHE.

Because this mechanism requires sharing the session database between server nodes in environments with multiple distributed servers, it may be hard to achieve high rates of successful PSK 0-RTT connections when compared to self-encrypted tickets. Unlike session databases, session tickets can successfully do PSK-based session establishment even without consistent storage, though when 0-RTT is allowed they still require consistent storage for anti-replay of 0-RTT data, as detailed in the following section.
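A minimal sketch of the single-use ticket approach; the in-memory store is illustrative, and a real deployment would use a shared database:

    outstanding_tickets = {}        # ticket identity -> PSK

    def redeem_ticket(ticket_identity):
        # Returns the PSK and deletes it, or None to force a full handshake.
        return outstanding_tickets.pop(ticket_identity, None)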

An alternative form of anti-replay is to record a unique value derived from the ClientHello (generally either the random value or the PSK binder) and reject duplicates. Recording all ClientHellos causes state to grow without bound, but a server can instead record ClientHellos within a given time window and use the “obfuscated_ticket_age” to ensure that tickets aren’t reused outside that window.

In order to implement this, when a ClientHello is received, the server first verifies the PSK binder as described in Section 4.2.11. It then computes the expected_arrival_time as described in the next section and rejects 0-RTT if it is outside the recording window, falling back to the 1-RTT handshake.

If the expected arrival time is in the window, then the server checks to see if it has recorded a matching ClientHello. If one is found, it either aborts the handshake with an “illegal_parameter” alert or accepts the PSK but rejects 0-RTT. If no matching ClientHello is found, then it accepts 0-RTT and then stores the ClientHello for as long as the expected_arrival_time is inside the window. Servers MAY also implement data stores with false positives, such as Bloom filters, in which case they MUST respond to apparent replay by rejecting 0-RTT but MUST NOT abort the handshake.

The server MUST derive the storage key only from validated sections of the ClientHello. If the ClientHello contains multiple PSK identities, then an attacker can create multiple ClientHellos with different binder values for the less-preferred identity on the assumption that the server will not verify it, as recommended by Section 4.2.11. I.e., if the client sends PSKs A and B but the server prefers A, then the attacker can change the binder for B without affecting the binder for A. If the binder for B is part of the storage key, then this ClientHello will not appear as a duplicate, which will cause the ClientHello to be accepted, and may cause side effects such as replay cache pollution, although any 0-RTT data will not be decryptable because it will use different keys. If the validated binder or the ClientHello.random are used as the storage key, then this attack is not possible.

Because this mechanism does not require storing all outstanding tickets, it may be easier to implement in distributed systems with high rates of resumption and 0-RTT, at the cost of potentially weaker anti-replay defense because of the difficulty of reliably storing and retrieving the received ClientHello messages. In many such systems, it is impractical to have globally consistent storage of all the received ClientHellos. In this case, the best anti-replay protection is provided by having a single storage zone be authoritative for a given ticket and refusing 0-RTT for that ticket in any other zone. This approach prevents simple replay by the attacker because only one zone will accept 0-RTT data. A weaker design is to implement separate storage for each zone but allow 0-RTT in any zone. This approach limits the number of replays to once per zone. Application message duplication of course remains possible with either design.

When implementations are freshly started, they SHOULD reject 0-RTT as long as any portion of their recording window overlaps the startup time. Otherwise, they run the risk of accepting replays which were originally sent during that period.

Note: If the client’s clock is running much faster than the server’s then a ClientHello may be received that is outside the window in the future, in which case it might be accepted for 1-RTT, causing a client retry, and then acceptable later for 0-RTT. This is another variant of the second form of attack described above.

Because the ClientHello indicates the time at which the client sent it, it is possible to efficiently determine whether a ClientHello was likely sent reasonably recently and only accept 0-RTT for such a ClientHello, otherwise falling back to a 1-RTT handshake. This is necessary for the ClientHello storage mechanism described in Section 8.2 because otherwise the server needs to store an unlimited number of ClientHellos and is a useful optimization for self-contained single-use tickets because it allows efficient rejection of ClientHellos which cannot be used for 0-RTT.

In order to implement this mechanism, a server needs to store the time that the server generated the session ticket, offset by an estimate of the round trip time between client and server. I.e.,

    adjusted_creation_time = creation_time + estimated_RTT

This value can be encoded in the ticket, thus avoiding the need to keep state for each outstanding ticket. The server can determine the client’s view of the age of the ticket by subtracting the ticket’s “ticket_age_add” value from the “obfuscated_ticket_age” parameter in the client’s “pre_shared_key” extension. The server can determine the “expected arrival time” of the ClientHello as:

    expected_arrival_time = adjusted_creation_time + clients_ticket_age

When a new ClientHello is received, the expected_arrival_time is then compared against the current server wall clock time and if they differ by more than a certain amount, 0-RTT is rejected, though the 1-RTT handshake can be allowed to complete.
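The following Python sketch (illustrative only; all times in milliseconds, with the window size and the seen-ClientHello store being deployment choices) combines the freshness check above with the ClientHello-recording mechanism of the previous section:

    def accept_0rtt(creation_time, estimated_rtt, ticket_age_add,
                    obfuscated_ticket_age, now, seen_client_hellos,
                    client_hello_key, window=10_000):
        adjusted_creation_time = creation_time + estimated_rtt
        clients_ticket_age = (obfuscated_ticket_age - ticket_age_add) % 2**32
        expected_arrival_time = adjusted_creation_time + clients_ticket_age
        if abs(now - expected_arrival_time) > window:
            return False          # outside the window: fall back to 1-RTT
        if client_hello_key in seen_client_hellos:
            return False          # apparent replay: reject 0-RTT only
        seen_client_hellos.add(client_hello_key)
        return True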

There are several potential sources of error that might cause mismatches between the expected arrival time and the measured time. Variations in client and server clock rates are likely to be minimal, though potentially the absolute times may be off by large values. Network propagation delays are the most likely causes of a mismatch in legitimate values for elapsed time. Both the NewSessionTicket and ClientHello messages might be retransmitted and therefore delayed, which might be hidden by TCP. For clients on the Internet, this implies windows on the order of ten seconds to account for errors in clocks and variations in measurements; other deployment scenarios may have different needs. Clock skew distributions are not symmetric, so the optimal tradeoff may involve an asymmetric range of permissible mismatch values.

Note that freshness checking alone is not sufficient to prevent replays because it does not detect them during the error window, which, depending on bandwidth and system capacity, could include billions of replays in real-world settings. In addition, this freshness checking is only done at the time the ClientHello is received, and not when later early application data records are received. After early data is accepted, records may continue to be streamed to the server over a longer time period.

In the absence of an application profile standard specifying otherwise, a TLS-compliant application MUST implement the TLS_AES_128_GCM_SHA256 [GCM] cipher suite and SHOULD implement the TLS_AES_256_GCM_SHA384 [GCM] and TLS_CHACHA20_POLY1305_SHA256 [RFC7539] cipher suites. (see Appendix B.4)

A TLS-compliant application MUST support digital signatures with rsa_pkcs1_sha256 (for certificates), rsa_pss_rsae_sha256 (for CertificateVerify and certificates), and ecdsa_secp256r1_sha256. A TLS-compliant application MUST support key exchange with secp256r1 (NIST P-256) and SHOULD support key exchange with X25519 [RFC7748].

In the absence of an application profile standard specifying otherwise, a TLS-compliant application MUST implement the following TLS extensions:

All implementations MUST send and use these extensions when offering applicable features:

  • “supported_versions” is REQUIRED for all ClientHello, ServerHello and HelloRetryRequest messages.
  • “signature_algorithms” is REQUIRED for certificate authentication.
  • “supported_groups” is REQUIRED for ClientHello messages using DHE or ECDHE key exchange.
  • “key_share” is REQUIRED for DHE or ECDHE key exchange.
  • “pre_shared_key” is REQUIRED for PSK key agreement.
  • “psk_key_exchange_modes” is REQUIRED for PSK key agreement.

A client is considered to be attempting to negotiate using this specification if the ClientHello contains a “supported_versions” extension with 0x0304 contained in its body. Such a ClientHello message MUST meet the following requirements:

  • If not containing a “pre_shared_key” extension, it MUST contain both a “signature_algorithms” extension and a “supported_groups” extension.
  • If containing a “supported_groups” extension, it MUST also contain a “key_share” extension, and vice versa. An empty KeyShare.client_shares vector is permitted.

Servers receiving a ClientHello which does not conform to these requirements MUST abort the handshake with a “missing_extension” alert.

Additionally, all implementations MUST support use of the “server_name” extension with applications capable of using it. Servers MAY require clients to send a valid “server_name” extension. Servers requiring this extension SHOULD respond to a ClientHello lacking a “server_name” extension by terminating the connection with a “missing_extension” alert.

This section describes invariants that TLS endpoints and middleboxes MUST follow. It also applies to earlier versions of TLS.

TLS is designed to be securely and compatibly extensible. Newer clients or servers, when communicating with newer peers, should negotiate the most preferred common parameters. The TLS handshake provides downgrade protection: Middleboxes passing traffic between a newer client and newer server without terminating TLS should be unable to influence the handshake (see Appendix E.1). At the same time, deployments update at different rates, so a newer client or server MAY continue to support older parameters, which would allow it to interoperate with older endpoints.

For this to work, implementations MUST correctly handle extensible fields:

  • A client sending a ClientHello MUST support all parameters advertised in it. Otherwise, the server may fail to interoperate by selecting one of those parameters.
  • A server receiving a ClientHello MUST correctly ignore all unrecognized cipher suites, extensions, and other parameters. Otherwise, it may fail to interoperate with newer clients. In TLS 1.3, a client receiving a CertificateRequest or NewSessionTicket MUST also ignore all unrecognized extensions.
  • A middlebox which terminates a TLS connection MUST behave as a compliant TLS server (to the original client), including having a certificate which the client is willing to accept, and as a compliant TLS client (to the original server), including verifying the original server’s certificate. In particular, it MUST generate its own ClientHello containing only parameters it understands, and it MUST generate a fresh ServerHello random value, rather than forwarding the endpoint’s value.

    Note that TLS’s protocol requirements and security analysis only apply to the two connections separately. Safely deploying a TLS terminator requires additional security considerations which are beyond the scope of this document.

  • A middlebox which forwards ClientHello parameters it does not understand MUST NOT process any messages beyond that ClientHello. It MUST forward all subsequent traffic unmodified. Otherwise, it may fail to interoperate with newer clients and servers.

    Forwarded ClientHellos may contain advertisements for features not supported by the middlebox, so the response may include future TLS additions the middlebox does not recognize. These additions MAY change any message beyond the ClientHello arbitrarily. In particular, the values sent in the ServerHello might change, the ServerHello format might change, and the TLSCiphertext format might change.

The design of TLS 1.3 was constrained by widely-deployed non-compliant TLS middleboxes (see Appendix D.4); however, it does not relax the invariants. Those middleboxes continue to be non-compliant.

Security issues are discussed throughout this memo, especially in Appendix C, Appendix D, and Appendix E.

This document uses several registries that were originally created in [RFC4346]. IANA [SHALL update/has updated] these to reference this document. The registries and their allocation policies are below:

  • TLS Cipher Suite Registry: values with the first byte in the range 0-254 (decimal) are assigned via Specification Required [RFC8126]. Values with the first byte 255 (decimal) are reserved for Private Use [RFC8126].

    IANA [SHALL add/has added] the cipher suites listed in Appendix B.4 to the registry. The “Value” and “Description” columns are taken from the table. The “DTLS-OK” and “Recommended” columns are both marked as “Yes” for each new cipher suite. [[This assumes [I-D.ietf-tls-iana-registry-updates] has been applied.]]

  • TLS ContentType Registry: Future values are allocated via Standards Action [RFC8126].
  • TLS Alert Registry: Future values are allocated via Standards Action [RFC8126]. IANA [SHALL update/has updated] this registry to include values for “missing_extension” and “certificate_required”. The “DTLS-OK” column is marked as “Yes” for each new alert.
  • TLS HandshakeType Registry: Future values are allocated via Standards Action [RFC8126]. IANA [SHALL update/has updated] this registry to rename item 4 from “NewSessionTicket” to “new_session_ticket” and to add the “hello_retry_request_RESERVED”, “encrypted_extensions”, “end_of_early_data”, “key_update”, and “message_hash” values. The “DTLS-OK” column is marked as “Yes” for each of these additions.

This document also uses the TLS ExtensionType Registry originally created in [RFC4366]. IANA has updated it to reference this document. Changes to the registry follow:

In addition, this document defines two new registries to be maintained by IANA:

  • TLS SignatureScheme Registry: Values with the first byte in the range 0-253 (decimal) are assigned via Specification Required [RFC8126]. Values with the first byte 254 or 255 (decimal) are reserved for Private Use [RFC8126]. Values with the first byte in the range 0-6 or with the second byte in the range 0-3 that are not currently allocated are reserved for backwards compatibility. This registry SHALL have a “Recommended” column. The registry [shall be/ has been] initially populated with the values described in Section 4.2.3. The following values SHALL be marked as “Recommended”: ecdsa_secp256r1_sha256, ecdsa_secp384r1_sha384, rsa_pss_rsae_sha256, rsa_pss_rsae_sha384, rsa_pss_rsae_sha512, rsa_pss_pss_sha256, rsa_pss_pss_sha384, rsa_pss_pss_sha512, and ed25519.
  • TLS PskKeyExchangeMode Registry: Values in the range 0-253 (decimal) are assigned via Specification Required [RFC8126]. The values 254 and 255 (decimal) are reserved for Private Use [RFC8126]. This registry SHALL have a “Recommended” column. The registry [shall be/ has been] initially populated with psk_ke (0) and psk_dhe_ke (1). Both SHALL be marked as “Recommended”.

12.1. Normative References

[DH]Diffie, W. and M. Hellman, "New Directions in Cryptography", IEEE Transactions on Information Theory, V.IT-22 n.6 , June 1977.
[DH76]Diffie, W. and M. Hellman, "New directions in cryptography", IEEE Transactions on Information Theory Vol. 22, pp. 644-654, DOI 10.1109/tit.1976.1055638, November 1976.
[GCM]Dworkin, M., "Recommendation for Block Cipher Modes of Operation: Galois/Counter Mode (GCM) and GMAC", NIST Special Publication 800-38D, November 2007.
[RFC2104]Krawczyk, H., Bellare, M. and R. Canetti, "HMAC: Keyed-Hashing for Message Authentication", RFC 2104, DOI 10.17487/RFC2104, February 1997.
[RFC2119]Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997.
[RFC5116]McGrew, D., "An Interface and Algorithms for Authenticated Encryption", RFC 5116, DOI 10.17487/RFC5116, January 2008.
[RFC5280]Cooper, D., Santesson, S., Farrell, S., Boeyen, S., Housley, R. and W. Polk, "Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile", RFC 5280, DOI 10.17487/RFC5280, May 2008.
[RFC5705]Rescorla, E., "Keying Material Exporters for Transport Layer Security (TLS)", RFC 5705, DOI 10.17487/RFC5705, March 2010.
[RFC5756]Turner, S., Brown, D., Yiu, K., Housley, R. and T. Polk, "Updates for RSAES-OAEP and RSASSA-PSS Algorithm Parameters", RFC 5756, DOI 10.17487/RFC5756, January 2010.
[RFC5869]Krawczyk, H. and P. Eronen, "HMAC-based Extract-and-Expand Key Derivation Function (HKDF)", RFC 5869, DOI 10.17487/RFC5869, May 2010.
[RFC6066]Eastlake 3rd, D., "Transport Layer Security (TLS) Extensions: Extension Definitions", RFC 6066, DOI 10.17487/RFC6066, January 2011.
[RFC6655]McGrew, D. and D. Bailey, "AES-CCM Cipher Suites for Transport Layer Security (TLS)", RFC 6655, DOI 10.17487/RFC6655, July 2012.
[RFC6960]Santesson, S., Myers, M., Ankney, R., Malpani, A., Galperin, S. and C. Adams, "X.509 Internet Public Key Infrastructure Online Certificate Status Protocol - OCSP", RFC 6960, DOI 10.17487/RFC6960, June 2013.
[RFC6961]Pettersen, Y., "The Transport Layer Security (TLS) Multiple Certificate Status Request Extension", RFC 6961, DOI 10.17487/RFC6961, June 2013.
[RFC6962]Laurie, B., Langley, A. and E. Kasper, "Certificate Transparency", RFC 6962, DOI 10.17487/RFC6962, June 2013.
[RFC6979]Pornin, T., "Deterministic Usage of the Digital Signature Algorithm (DSA) and Elliptic Curve Digital Signature Algorithm (ECDSA)", RFC 6979, DOI 10.17487/RFC6979, August 2013.
[RFC7301]Friedl, S., Popov, A., Langley, A. and E. Stephan, "Transport Layer Security (TLS) Application-Layer Protocol Negotiation Extension", RFC 7301, DOI 10.17487/RFC7301, July 2014.
[RFC7507]Moeller, B. and A. Langley, "TLS Fallback Signaling Cipher Suite Value (SCSV) for Preventing Protocol Downgrade Attacks", RFC 7507, DOI 10.17487/RFC7507, April 2015.
[RFC7539]Nir, Y. and A. Langley, "ChaCha20 and Poly1305 for IETF Protocols", RFC 7539, DOI 10.17487/RFC7539, May 2015.
[RFC7748]Langley, A., Hamburg, M. and S. Turner, "Elliptic Curves for Security", RFC 7748, DOI 10.17487/RFC7748, January 2016.
[RFC7919]Gillmor, D., "Negotiated Finite Field Diffie-Hellman Ephemeral Parameters for Transport Layer Security (TLS)", RFC 7919, DOI 10.17487/RFC7919, August 2016.
[RFC8017]Moriarty, K., Kaliski, B., Jonsson, J. and A. Rusch, "PKCS #1: RSA Cryptography Specifications Version 2.2", RFC 8017, DOI 10.17487/RFC8017, November 2016.
[RFC8032]Josefsson, S. and I. Liusvaara, "Edwards-Curve Digital Signature Algorithm (EdDSA)", RFC 8032, DOI 10.17487/RFC8032, January 2017.
[RFC8126]Cotton, M., Leiba, B. and T. Narten, "Guidelines for Writing an IANA Considerations Section in RFCs", BCP 26, RFC 8126, DOI 10.17487/RFC8126, June 2017.
[RFC8174]Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, May 2017.
[SHS]Dang, Q., "Secure Hash Standard", National Institute of Standards and Technology report, DOI 10.6028/nist.fips.180-4, July 2015.
[X690]ITU-T, "Information technology - ASN.1 encoding Rules: Specification of Basic Encoding Rules (BER), Canonical Encoding Rules (CER) and Distinguished Encoding Rules (DER)", ISO/IEC 8825-1:2002, 2002.
[X962]ANSI, "Public Key Cryptography For The Financial Services Industry: The Elliptic Curve Digital Signature Algorithm (ECDSA)", ANSI X9.62, 1998.

12.2. Informative References

[AEAD-LIMITS]Luykx, A. and K. Paterson, "Limits on Authenticated Encryption Use in TLS", 2016.
[Anon18]Anonymous, A., "Secure Channels for Multiplexed Data Streams: Analyzing the TLS 1.3 Record Layer Without Elision", In submission to CRYPTO 2018. RFC EDITOR: PLEASE UPDATE THIS REFERENCE AFTER FINAL NOTIFICATION (2018-4-29). , 2018.
[BBFKZG16]Bhargavan, K., Brzuska, C., Fournet, C., Kohlweiss, M., Zanella-Beguelin, S. and M. Green, "Downgrade Resilience in Key-Exchange Protocols", Proceedings of IEEE Symposium on Security and Privacy (Oakland) 2016 , 2016.
[BBK17]Bhargavan, K., Blanchet, B. and N. Kobeissi, "Verified Models and Reference Implementations for the TLS 1.3 Standard Candidate", Proceedings of IEEE Symposium on Security and Privacy (Oakland) 2017 , 2017.
[BDFKPPRSZZ16]Bhargavan, K., Delignat-Lavaud, A., Fournet, C., Kohlweiss, M., Pan, J., Protzenko, J., Rastogi, A., Swamy, N., Zanella-Beguelin, S. and J. Zinzindohoue, "Implementing and Proving the TLS 1.3 Record Layer", Proceedings of IEEE Symposium on Security and Privacy (Oakland) 2017 , December 2016.
[Ben17a]Benjamin, D., "Presentation before the TLS WG at IETF 100", 2017.
[Ben17b]Benjamin, D., "Additional TLS 1.3 results from Chrome", 2017.
[BMMT15]Badertscher, C., Matt, C., Maurer, U. and B. Tackmann, "Augmented Secure Channels and the Goal of the TLS 1.3 Record Layer", ProvSec 2015 , September 2015.
[BT16]Bellare, M. and B. Tackmann, "The Multi-User Security of Authenticated Encryption: AES-GCM in TLS 1.3", Proceedings of CRYPTO 2016 , 2016.
[CCG16]Cohn-Gordon, K., Cremers, C. and L. Garratt, "On Post-Compromise Security", IEEE Computer Security Foundations Symposium , 2015.
[CHECKOWAY]Checkoway, S., Shacham, H., Maskiewicz, J., Garman, C., Fried, J., Cohney, S., Green, M., Heninger, N., Weinmann, R. and E. Rescorla, "A Systematic Analysis of the Juniper Dual EC Incident", Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security - CCS'16, DOI 10.1145/2976749.2978395, 2016.
[CHHSV17]Cremers, C., Horvat, M., Hoyland, J., van der Merwe, T. and S. Scott, "Awkward Handshake: Possible mismatch of client/server view on client authentication in post-handshake mode in Revision 18", 2017.
[CHSV16]Cremers, C., Horvat, M., Scott, S. and T. van der Merwe, "Automated Analysis and Verification of TLS 1.3: 0-RTT, Resumption and Delayed Authentication", Proceedings of IEEE Symposium on Security and Privacy (Oakland) 2016 , 2016.
[CK01]Canetti, R. and H. Krawczyk, "Analysis of Key-Exchange Protocols and Their Use for Building Secure Channels", Proceedings of Eurocrypt 2001 , 2001.
[CLINIC]Miller, B., Huang, L., Joseph, A. and J. Tygar, "I Know Why You Went to the Clinic: Risks and Realization of HTTPS Traffic Analysis", Privacy Enhancing Technologies pp. 143-163, DOI 10.1007/978-3-319-08506-7_8, 2014.
[DFGS15]Dowling, B., Fischlin, M., Guenther, F. and D. Stebila, "A Cryptographic Analysis of the TLS 1.3 draft-10 Full and Pre-shared Key Handshake Protocol", Proceedings of ACM CCS 2015 , 2015.
[DFGS16]Dowling, B., Fischlin, M., Guenther, F. and D. Stebila, "A Cryptographic Analysis of the TLS 1.3 draft-10 Full and Pre-shared Key Handshake Protocol", TRON 2016 , 2016.
[DOW92]Diffie, W., van Oorschot, P. and M. Wiener, "Authentication and authenticated key exchanges", Designs, Codes and Cryptography , 1992.
[DSS]National Institute of Standards and Technology, U.S. Department of Commerce, "Digital Signature Standard, version 4", NIST FIPS PUB 186-4, 2013.
[ECDSA]American National Standards Institute, "Public Key Cryptography for the Financial Services Industry: The Elliptic Curve Digital Signature Algorithm (ECDSA)", ANSI ANS X9.62-2005, November 2005.
[FG17]Fischlin, M. and F. Guenther, "Replay Attacks on Zero Round-Trip Time: The Case of the TLS 1.3 Handshake Candidates", Proceedings of Euro S&P 2017 , 2017.
[FGSW16]Fischlin, M., Guenther, F., Schmidt, B. and B. Warinschi, "Key Confirmation in Key Exchange: A Formal Treatment and Implications for TLS 1.3", Proceedings of IEEE Symposium on Security and Privacy (Oakland) 2016 , 2016.
[FW15]Weimer, F., "Factoring RSA Keys With TLS Perfect Forward Secrecy", September 2015.
[HCJ16]Husák, M., Čermák, M., Jirsík, T. and P. Čeleda, "HTTPS traffic analysis and client identification using passive SSL/TLS fingerprinting", EURASIP Journal on Information Security Vol. 2016, DOI 10.1186/s13635-016-0030-7, February 2016.
[HGFS15]Hlauschek, C., Gruber, M., Fankhauser, F. and C. Schanes, "Prying Open Pandora's Box: KCI Attacks against TLS", Proceedings of USENIX Workshop on Offensive Technologies , 2015.
[I-D.ietf-tls-iana-registry-updates]Salowey, J. and S. Turner, "IANA Registry Updates for TLS and DTLS", Internet-Draft draft-ietf-tls-iana-registry-updates-04, February 2018.
[I-D.ietf-tls-tls13-vectors]Thomson, M., "Example Handshake Traces for TLS 1.3", Internet-Draft draft-ietf-tls-tls13-vectors-03, December 2017.
[IEEE1363]IEEE, "Standard Specifications for Public Key Cryptography", IEEE 1363 , 2000.
[JSS15]Jager, T., Schwenk, J. and J. Somorovsky, "On the Security of TLS 1.3 and QUIC Against Weaknesses in PKCS#1 v1.5 Encryption", Proceedings of ACM CCS 2015 , 2015.
[KEYAGREEMENT]Barker, E., Chen, L., Roginsky, A. and M. Smid, "Recommendation for Pair-Wise Key Establishment Schemes Using Discrete Logarithm Cryptography", National Institute of Standards and Technology report, DOI 10.6028/nist.sp.800-56ar2, May 2013.
[Kraw10]Krawczyk, H., "Cryptographic Extraction and Key Derivation: The HKDF Scheme", Proceedings of CRYPTO 2010 , 2010.
[Kraw16]Krawczyk, H., "A Unilateral-to-Mutual Authentication Compiler for Key Exchange (with Applications to Client Authentication in TLS 1.3", Proceedings of ACM CCS 2016 , 2016.
[KW16]Krawczyk, H. and H. Wee, "The OPTLS Protocol and TLS 1.3", Proceedings of Euro S&P 2016 , 2016.
[LXZFH16]Li, X., Xu, J., Feng, D., Zhang, Z. and H. Hu, "Multiple Handshakes Security of TLS 1.3 Candidates", Proceedings of IEEE Symposium on Security and Privacy (Oakland) 2016 , 2016.
[Mac17]MacCarthaigh, C., "Security Review of TLS1.3 0-RTT", 2017.
[PSK-FINISHED]Cremers, C., Horvat, M., van der Merwe, T. and S. Scott, "Revision 10: possible attack if client authentication is allowed during PSK", 2015.
[REKEY]Abdalla, M. and M. Bellare, "Increasing the Lifetime of a Key: A Comparative Analysis of the Security of Re-keying Techniques", ASIACRYPT2000 , October 2000.
[Res17a]Rescorla, E., "Preliminary data on Firefox TLS 1.3 Middlebox experiment", 2017.
[Res17b]Rescorla, E., "More compatibility measurement results", 2017.
[RFC3552]Rescorla, E. and B. Korver, "Guidelines for Writing RFC Text on Security Considerations", BCP 72, RFC 3552, DOI 10.17487/RFC3552, July 2003.
[RFC4086]Eastlake 3rd, D., Schiller, J. and S. Crocker, "Randomness Requirements for Security", BCP 106, RFC 4086, DOI 10.17487/RFC4086, June 2005.
[RFC4346]Dierks, T. and E. Rescorla, "The Transport Layer Security (TLS) Protocol Version 1.1", RFC 4346, DOI 10.17487/RFC4346, April 2006.
[RFC4366]Blake-Wilson, S., Nystrom, M., Hopwood, D., Mikkelsen, J. and T. Wright, "Transport Layer Security (TLS) Extensions", RFC 4366, DOI 10.17487/RFC4366, April 2006.
[RFC4492]Blake-Wilson, S., Bolyard, N., Gupta, V., Hawk, C. and B. Moeller, "Elliptic Curve Cryptography (ECC) Cipher Suites for Transport Layer Security (TLS)", RFC 4492, DOI 10.17487/RFC4492, May 2006.
[RFC5077]Salowey, J., Zhou, H., Eronen, P. and H. Tschofenig, "Transport Layer Security (TLS) Session Resumption without Server-Side State", RFC 5077, DOI 10.17487/RFC5077, January 2008.
[RFC5246]Dierks, T. and E. Rescorla, "The Transport Layer Security (TLS) Protocol Version 1.2", RFC 5246, DOI 10.17487/RFC5246, August 2008.
[RFC5764]McGrew, D. and E. Rescorla, "Datagram Transport Layer Security (DTLS) Extension to Establish Keys for the Secure Real-time Transport Protocol (SRTP)", RFC 5764, DOI 10.17487/RFC5764, May 2010.
[RFC5929]Altman, J., Williams, N. and L. Zhu, "Channel Bindings for TLS", RFC 5929, DOI 10.17487/RFC5929, July 2010.
[RFC6091]Mavrogiannopoulos, N. and D. Gillmor, "Using OpenPGP Keys for Transport Layer Security (TLS) Authentication", RFC 6091, DOI 10.17487/RFC6091, February 2011.
[RFC6176]Turner, S. and T. Polk, "Prohibiting Secure Sockets Layer (SSL) Version 2.0", RFC 6176, DOI 10.17487/RFC6176, March 2011.
[RFC6347]Rescorla, E. and N. Modadugu, "Datagram Transport Layer Security Version 1.2", RFC 6347, DOI 10.17487/RFC6347, January 2012.
[RFC6520]Seggelmann, R., Tuexen, M. and M. Williams, "Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS) Heartbeat Extension", RFC 6520, DOI 10.17487/RFC6520, February 2012.
[RFC7230]Fielding, R. and J. Reschke, "Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing", RFC 7230, DOI 10.17487/RFC7230, June 2014.
[RFC7250]Wouters, P., Tschofenig, H., Gilmore, J., Weiler, S. and T. Kivinen, "Using Raw Public Keys in Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS)", RFC 7250, DOI 10.17487/RFC7250, June 2014.
[RFC7465]Popov, A., "Prohibiting RC4 Cipher Suites", RFC 7465, DOI 10.17487/RFC7465, February 2015.
[RFC7568]Barnes, R., Thomson, M., Pironti, A. and A. Langley, "Deprecating Secure Sockets Layer Version 3.0", RFC 7568, DOI 10.17487/RFC7568, June 2015.
[RFC7627]Bhargavan, K., Delignat-Lavaud, A., Pironti, A., Langley, A. and M. Ray, "Transport Layer Security (TLS) Session Hash and Extended Master Secret Extension", RFC 7627, DOI 10.17487/RFC7627, September 2015.
[RFC7685]Langley, A., "A Transport Layer Security (TLS) ClientHello Padding Extension", RFC 7685, DOI 10.17487/RFC7685, October 2015.
[RFC7924]Santesson, S. and H. Tschofenig, "Transport Layer Security (TLS) Cached Information Extension", RFC 7924, DOI 10.17487/RFC7924, July 2016.
[RFC8305]Schinazi, D. and T. Pauly, "Happy Eyeballs Version 2: Better Connectivity Using Concurrency", RFC 8305, DOI 10.17487/RFC8305, December 2017.
[RSA]Rivest, R., Shamir, A. and L. Adleman, "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems", Communications of the ACM v. 21, n. 2, pp. 120-126., February 1978.
[SIGMA]Krawczyk, H., "SIGMA: the 'SIGn-and-MAc' approach to authenticated Diffie-Hellman and its use in the IKE protocols", Proceedings of CRYPTO 2003 , 2003.
[SLOTH]Bhargavan, K. and G. Leurent, "Transcript Collision Attacks: Breaking Authentication in TLS, IKE, and SSH", Network and Distributed System Security Symposium (NDSS 2016) , 2016.
[SSL2]Hickman, K., "The SSL Protocol", February 1995.
[SSL3]Freier, A., Karlton, P. and P. Kocher, "The SSL 3.0 Protocol", November 1996.
[TIMING]Boneh, D. and D. Brumley, "Remote timing attacks are practical", USENIX Security Symposium, 2003.
[X501]"Information Technology - Open Systems Interconnection - The Directory: Models", ITU-T X.501, 1993.

This section provides a summary of the legal state transitions for the client and server handshakes. State names (in all capitals, e.g., START) have no formal meaning but are provided for ease of comprehension. Actions which are taken only in certain circumstances are indicated in []. The notation “K_{send,recv} = foo” means “set the send/recv key to the given key”.

                           START <----+
            Send ClientHello |        | Recv HelloRetryRequest
       [K_send = early data] |        |
                             v        |
        /                 WAIT_SH ----+
        |                    | Recv ServerHello
        |                    | K_recv = handshake
    Can |                    V
   send |                 WAIT_EE
  early |                    | Recv EncryptedExtensions
   data |           +--------+--------+
        |     Using |                 | Using certificate
        |       PSK |                 v
        |           |            WAIT_CERT_CR
        |           |        Recv |       | Recv CertificateRequest
        |           | Certificate |       v
        |           |             |    WAIT_CERT
        |           |             |       | Recv Certificate
        |           |             v       v
        |           |              WAIT_CV
        |           |                 | Recv CertificateVerify
        |           +> WAIT_FINISHED <+
        |                  | Recv Finished
        \                  | [Send EndOfEarlyData]
                           | K_send = handshake
                           | [Send Certificate [+ CertificateVerify]]
 Can send                  | Send Finished
 app data   -->            | K_send = K_recv = application
 after here                v
                       CONNECTED

Note that with the transitions as shown above, clients may send alerts that derive from post-ServerHello messages in the clear or with the early data keys. If clients need to send such alerts, they SHOULD first rekey to the handshake keys if possible.

                             START <-----+
              Recv ClientHello |         | Send HelloRetryRequest
                               v         |
                            RECVD_CH ----+
                               | Select parameters
                               v
                            NEGOTIATED
                               | Send ServerHello
                               | K_send = handshake
                               | Send EncryptedExtensions
                               | [Send CertificateRequest]
Can send                       | [Send Certificate + CertificateVerify]
app data                       | Send Finished
after   -->                    | K_send = application
here                  +--------+--------+
             No 0-RTT |                 | 0-RTT
                      |                 |
  K_recv = handshake  |                 | K_recv = early data
[Skip decrypt errors] |    +------> WAIT_EOED -+
                      |    |       Recv |      | Recv EndOfEarlyData
                      |    | early data |      | K_recv = handshake
                      |    +------------+      |
                      |                        |
                      +> WAIT_FLIGHT2 <--------+
                               |
                      +--------+--------+
              No auth |                 | Client auth
                      |                 |
                      |                 v
                      |             WAIT_CERT
                      |        Recv |       | Recv Certificate
                      |       empty |       v
                      | Certificate |    WAIT_CV
                      |             |       | Recv
                      |             v       | CertificateVerify
                      +-> WAIT_FINISHED <---+
                               | Recv Finished
                               | K_recv = application
                               v
                           CONNECTED

This section provides the normative definitions of the protocol types and constants. Values listed as _RESERVED were used in previous versions of TLS and are listed here for completeness. TLS 1.3 implementations MUST NOT send them but might receive them from older TLS implementations.

   enum {
       invalid(0),
       change_cipher_spec(20),
       alert(21),
       handshake(22),
       application_data(23),
       (255)
   } ContentType;

   struct {
       ContentType type;
       ProtocolVersion legacy_record_version;
       uint16 length;
       opaque fragment[TLSPlaintext.length];
   } TLSPlaintext;

   struct {
       opaque content[TLSPlaintext.length];
       ContentType type;
       uint8 zeros[length_of_padding];
   } TLSInnerPlaintext;

   struct {
       ContentType opaque_type = application_data; /* 23 */
       ProtocolVersion legacy_record_version = 0x0303; /* TLS v1.2 */
       uint16 length;
       opaque encrypted_record[TLSCiphertext.length];
   } TLSCiphertext;
   enum { warning(1), fatal(2), (255) } AlertLevel;

   enum {
       close_notify(0),
       unexpected_message(10),
       bad_record_mac(20),
       decryption_failed_RESERVED(21),
       record_overflow(22),
       decompression_failure_RESERVED(30),
       handshake_failure(40),
       no_certificate_RESERVED(41),
       bad_certificate(42),
       unsupported_certificate(43),
       certificate_revoked(44),
       certificate_expired(45),
       certificate_unknown(46),
       illegal_parameter(47),
       unknown_ca(48),
       access_denied(49),
       decode_error(50),
       decrypt_error(51),
       export_restriction_RESERVED(60),
       protocol_version(70),
       insufficient_security(71),
       internal_error(80),
       inappropriate_fallback(86),
       user_canceled(90),
       no_renegotiation_RESERVED(100),
       missing_extension(109),
       unsupported_extension(110),
       certificate_unobtainable_RESERVED(111),
       unrecognized_name(112),
       bad_certificate_status_response(113),
       bad_certificate_hash_value_RESERVED(114),
       unknown_psk_identity(115),
       certificate_required(116),
       no_application_protocol(120),
       (255)
   } AlertDescription;

   struct {
       AlertLevel level;
       AlertDescription description;
   } Alert;
   enum {
       hello_request_RESERVED(0),
       client_hello(1),
       server_hello(2),
       hello_verify_request_RESERVED(3),
       new_session_ticket(4),
       end_of_early_data(5),
       hello_retry_request_RESERVED(6),
       encrypted_extensions(8),
       certificate(11),
       server_key_exchange_RESERVED(12),
       certificate_request(13),
       server_hello_done_RESERVED(14),
       certificate_verify(15),
       client_key_exchange_RESERVED(16),
       finished(20),
       key_update(24),
       message_hash(254),
       (255)
   } HandshakeType;

   struct {
       HandshakeType msg_type;    /* handshake type */
       uint24 length;             /* bytes in message */
       select (Handshake.msg_type) {
           case client_hello:          ClientHello;
           case server_hello:          ServerHello;
           case end_of_early_data:     EndOfEarlyData;
           case encrypted_extensions:  EncryptedExtensions;
           case certificate_request:   CertificateRequest;
           case certificate:           Certificate;
           case certificate_verify:    CertificateVerify;
           case finished:              Finished;
           case new_session_ticket:    NewSessionTicket;
           case key_update:            KeyUpdate;
       };
   } Handshake;

B.3.1. Key Exchange Messages

   uint16 ProtocolVersion;
   opaque Random[32];

   uint8 CipherSuite[2];    /* Cryptographic suite selector */

   struct {
       ProtocolVersion legacy_version = 0x0303;    /* TLS v1.2 */
       Random random;
       opaque legacy_session_id<0..32>;
       CipherSuite cipher_suites<2..2^16-2>;
       opaque legacy_compression_methods<1..2^8-1>;
       Extension extensions<8..2^16-1>;
   } ClientHello;

   struct {
       ProtocolVersion legacy_version = 0x0303;    /* TLS v1.2 */
       Random random;
       opaque legacy_session_id_echo<0..32>;
       CipherSuite cipher_suite;
       uint8 legacy_compression_method = 0;
       Extension extensions<6..2^16-1>;
   } ServerHello;

   struct {
       ExtensionType extension_type;
       opaque extension_data<0..2^16-1>;
   } Extension;

   enum {
       server_name(0),                             /* RFC 6066 */
       max_fragment_length(1),                     /* RFC 6066 */
       status_request(5),                          /* RFC 6066 */
       supported_groups(10),                       /* RFC 4492, 7919 */
       signature_algorithms(13),                   /* [[this document]] */
       use_srtp(14),                               /* RFC 5764 */
       heartbeat(15),                              /* RFC 6520 */
       application_layer_protocol_negotiation(16), /* RFC 7301 */
       signed_certificate_timestamp(18),           /* RFC 6962 */
       client_certificate_type(19),                /* RFC 7250 */
       server_certificate_type(20),                /* RFC 7250 */
       padding(21),                                /* RFC 7685 */
       RESERVED(40),                               /* Used but never assigned */
       pre_shared_key(41),                         /* [[this document]] */
       early_data(42),                             /* [[this document]] */
       supported_versions(43),                     /* [[this document]] */
       cookie(44),                                 /* [[this document]] */
       psk_key_exchange_modes(45),                 /* [[this document]] */
       RESERVED(46),                               /* Used but never assigned */
       certificate_authorities(47),                /* [[this document]] */
       oid_filters(48),                            /* [[this document]] */
       post_handshake_auth(49),                    /* [[this document]] */
       signature_algorithms_cert(50),              /* [[this document]] */
       key_share(51),                              /* [[this document]] */
       (65535)
   } ExtensionType;

   struct {
       NamedGroup group;
       opaque key_exchange<1..2^16-1>;
   } KeyShareEntry;

   struct {
       KeyShareEntry client_shares<0..2^16-1>;
   } KeyShareClientHello;

   struct {
       NamedGroup selected_group;
   } KeyShareHelloRetryRequest;

   struct {
       KeyShareEntry server_share;
   } KeyShareServerHello;

   struct {
       uint8 legacy_form = 4;
       opaque X[coordinate_length];
       opaque Y[coordinate_length];
   } UncompressedPointRepresentation;

   enum { psk_ke(0), psk_dhe_ke(1), (255) } PskKeyExchangeMode;

   struct {
       PskKeyExchangeMode ke_modes<1..255>;
   } PskKeyExchangeModes;

   struct {} Empty;

   struct {
       select (Handshake.msg_type) {
           case new_session_ticket:   uint32 max_early_data_size;
           case client_hello:         Empty;
           case encrypted_extensions: Empty;
       };
   } EarlyDataIndication;

   struct {
       opaque identity<1..2^16-1>;
       uint32 obfuscated_ticket_age;
   } PskIdentity;

   opaque PskBinderEntry<32..255>;

   struct {
       PskIdentity identities<7..2^16-1>;
       PskBinderEntry binders<33..2^16-1>;
   } OfferedPsks;

   struct {
       select (Handshake.msg_type) {
           case client_hello: OfferedPsks;
           case server_hello: uint16 selected_identity;
       };
   } PreSharedKeyExtension;

B.3.1.1. Version Extension

   struct {
       select (Handshake.msg_type) {
           case client_hello:
                ProtocolVersion versions<2..254>;

           case server_hello: /* and HelloRetryRequest */
                ProtocolVersion selected_version;
       };
   } SupportedVersions;

B.3.1.2. Cookie Extension

   struct {
       opaque cookie<1..2^16-1>;
   } Cookie;

B.3.1.3. Signature Algorithm Extension

   enum {
       /* RSASSA-PKCS1-v1_5 algorithms */
       rsa_pkcs1_sha256(0x0401),
       rsa_pkcs1_sha384(0x0501),
       rsa_pkcs1_sha512(0x0601),

       /* ECDSA algorithms */
       ecdsa_secp256r1_sha256(0x0403),
       ecdsa_secp384r1_sha384(0x0503),
       ecdsa_secp521r1_sha512(0x0603),

       /* RSASSA-PSS algorithms with public key OID rsaEncryption */
       rsa_pss_rsae_sha256(0x0804),
       rsa_pss_rsae_sha384(0x0805),
       rsa_pss_rsae_sha512(0x0806),

       /* EdDSA algorithms */
       ed25519(0x0807),
       ed448(0x0808),

       /* RSASSA-PSS algorithms with public key OID RSASSA-PSS */
       rsa_pss_pss_sha256(0x0809),
       rsa_pss_pss_sha384(0x080a),
       rsa_pss_pss_sha512(0x080b),

       /* Legacy algorithms */
       rsa_pkcs1_sha1(0x0201),
       ecdsa_sha1(0x0203),

       /* Reserved Code Points */
       obsolete_RESERVED(0x0000..0x0200),
       dsa_sha1_RESERVED(0x0202),
       obsolete_RESERVED(0x0204..0x0400),
       dsa_sha256_RESERVED(0x0402),
       obsolete_RESERVED(0x0404..0x0500),
       dsa_sha384_RESERVED(0x0502),
       obsolete_RESERVED(0x0504..0x0600),
       dsa_sha512_RESERVED(0x0602),
       obsolete_RESERVED(0x0604..0x06FF),
       private_use(0xFE00..0xFFFF),
       (0xFFFF)
   } SignatureScheme;

   struct {
       SignatureScheme supported_signature_algorithms<2..2^16-2>;
   } SignatureSchemeList;

B.3.1.4. Supported Groups Extension

   enum {
       unallocated_RESERVED(0x0000),

       /* Elliptic Curve Groups (ECDHE) */
       obsolete_RESERVED(0x0001..0x0016),
       secp256r1(0x0017), secp384r1(0x0018), secp521r1(0x0019),
       obsolete_RESERVED(0x001A..0x001C),
       x25519(0x001D), x448(0x001E),

       /* Finite Field Groups (DHE) */
       ffdhe2048(0x0100), ffdhe3072(0x0101), ffdhe4096(0x0102),
       ffdhe6144(0x0103), ffdhe8192(0x0104),

       /* Reserved Code Points */
       ffdhe_private_use(0x01FC..0x01FF),
       ecdhe_private_use(0xFE00..0xFEFF),
       obsolete_RESERVED(0xFF01..0xFF02),
       (0xFFFF)
   } NamedGroup;

   struct {
       NamedGroup named_group_list<2..2^16-1>;
   } NamedGroupList;

Values within “obsolete_RESERVED” ranges are used in previous versions of TLS and MUST NOT be offered or negotiated by TLS 1.3 implementations. The obsolete curves have various known/theoretical weaknesses or have had very little usage, in some cases only due to unintentional server configuration issues. They are no longer considered appropriate for general use and should be assumed to be potentially unsafe. The set of curves specified here is sufficient for interoperability with all currently deployed and properly configured TLS implementations.

B.3.2. Server Parameters Messages

   opaque DistinguishedName<1..2^16-1>;

   struct {
       DistinguishedName authorities<3..2^16-1>;
   } CertificateAuthoritiesExtension;

   struct {
       opaque certificate_extension_oid<1..2^8-1>;
       opaque certificate_extension_values<0..2^16-1>;
   } OIDFilter;

   struct {
       OIDFilter filters<0..2^16-1>;
   } OIDFilterExtension;

   struct {} PostHandshakeAuth;

   struct {
       Extension extensions<0..2^16-1>;
   } EncryptedExtensions;

   struct {
       opaque certificate_request_context<0..2^8-1>;
       Extension extensions<2..2^16-1>;
   } CertificateRequest;

B.3.3. Authentication Messages

   /* Managed by IANA */
   enum {
       X509(0),
       OpenPGP_RESERVED(1),
       RawPublicKey(2),
       (255)
   } CertificateType;

   struct {
       select (certificate_type) {
           case RawPublicKey:
             /* From RFC 7250 ASN.1_subjectPublicKeyInfo */
             opaque ASN1_subjectPublicKeyInfo<1..2^24-1>;

           case X509:
             opaque cert_data<1..2^24-1>;
       };
       Extension extensions<0..2^16-1>;
   } CertificateEntry;

   struct {
       opaque certificate_request_context<0..2^8-1>;
       CertificateEntry certificate_list<0..2^24-1>;
   } Certificate;

   struct {
       SignatureScheme algorithm;
       opaque signature<0..2^16-1>;
   } CertificateVerify;

   struct {
       opaque verify_data[Hash.length];
   } Finished;

B.3.4. Ticket Establishment

   struct {
       uint32 ticket_lifetime;
       uint32 ticket_age_add;
       opaque ticket_nonce<0..255>;
       opaque ticket<1..2^16-1>;
       Extension extensions<0..2^16-2>;
   } NewSessionTicket;

B.3.5. Updating Keys

   struct {} EndOfEarlyData;

   enum {
       update_not_requested(0), update_requested(1), (255)
   } KeyUpdateRequest;

   struct {
       KeyUpdateRequest request_update;
   } KeyUpdate;

A symmetric cipher suite defines the pair of the AEAD algorithm and hash algorithm to be used with HKDF. Cipher suite names follow the naming convention:

   CipherSuite TLS_AEAD_HASH = VALUE;

   +-----------+------------------------------------------------+
   | Component | Contents                                       |
   +-----------+------------------------------------------------+
   | TLS       | The string "TLS"                               |
   | AEAD      | The AEAD algorithm used for record protection  |
   | HASH      | The hash algorithm used with HKDF              |
   | VALUE     | The two-byte ID assigned for this cipher suite |
   +-----------+------------------------------------------------+

This specification defines the following cipher suites for use with TLS 1.3.

   +------------------------------+-------------+
   | Description                  | Value       |
   +------------------------------+-------------+
   | TLS_AES_128_GCM_SHA256       | {0x13,0x01} |
   | TLS_AES_256_GCM_SHA384       | {0x13,0x02} |
   | TLS_CHACHA20_POLY1305_SHA256 | {0x13,0x03} |
   | TLS_AES_128_CCM_SHA256       | {0x13,0x04} |
   | TLS_AES_128_CCM_8_SHA256     | {0x13,0x05} |
   +------------------------------+-------------+

The corresponding AEAD algorithms AEAD_AES_128_GCM, AEAD_AES_256_GCM, and AEAD_AES_128_CCM are defined in [RFC5116]. AEAD_CHACHA20_POLY1305 is defined in [RFC7539]. AEAD_AES_128_CCM_8 is defined in [RFC6655]. The corresponding hash algorithms are defined in [SHS].

Although TLS 1.3 uses the same cipher suite space as previous versions of TLS, TLS 1.3 cipher suites are defined differently, only specifying the symmetric ciphers, and cannot be used for TLS 1.2. Similarly, TLS 1.2 and lower cipher suites cannot be used with TLS 1.3.

New cipher suite values are assigned by IANA as described in Section 11.

The TLS protocol cannot prevent many common security mistakes. This section provides several recommendations to assist implementors. [I-D.ietf-tls-tls13-vectors] provides test vectors for TLS 1.3 handshakes.

TLS requires a cryptographically secure pseudorandom number generator (CSPRNG). In most cases, the operating system provides an appropriate facility such as /dev/urandom, which should be used absent other (performance) concerns. It is RECOMMENDED to use an existing CSPRNG implementation in preference to crafting a new one. Many adequate cryptographic libraries are already available under favorable license terms. Should those prove unsatisfactory, [RFC4086] provides guidance on the generation of random values.

TLS uses random values both in public protocol fields such as the public Random values in the ClientHello and ServerHello and to generate keying material. With a properly functioning CSPRNG, this does not present a security problem as it is not feasible to determine the CSPRNG state from its output. However, with a broken CSPRNG, it may be possible for an attacker to use the public output to determine the CSPRNG internal state and thereby predict the keying material, as documented in [CHECKOWAY]. Implementations can provide extra security against this form of attack by using separate CSPRNGs to generate public and private values.
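
For example, a minimal, non-normative Python sketch of sourcing the 32-byte Random values from the operating system CSPRNG rather than a hand-rolled generator might look like the following; the variable names are illustrative only.

   import os
   import secrets

   # Draw the public Random values (and any other security-critical
   # randomness) from the operating system CSPRNG.
   client_hello_random = secrets.token_bytes(32)
   server_hello_random = os.urandom(32)   # equivalent lower-level call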

Implementations are responsible for verifying the integrity of certificates and should generally support certificate revocation messages. Absent a specific indication from an application profile, certificates should always be verified to ensure proper signing by a trusted Certificate Authority (CA). The selection and addition of trust anchors should be done very carefully. Users should be able to view information about the certificate and trust anchor. Applications SHOULD also enforce minimum and maximum key sizes. For example, certification paths containing keys or signatures weaker than 2048-bit RSA or 224-bit ECDSA are not appropriate for secure applications.
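
A minimal sketch of such a key-size check, assuming the third-party pyca/cryptography package and using the 2048-bit RSA / 224-bit ECDSA figures above as an illustrative policy:

   from cryptography import x509
   from cryptography.hazmat.primitives.asymmetric import ec, rsa

   def key_strength_acceptable(cert_der: bytes) -> bool:
       # Reject end-entity keys weaker than the illustrative thresholds.
       public_key = x509.load_der_x509_certificate(cert_der).public_key()
       if isinstance(public_key, rsa.RSAPublicKey):
           return public_key.key_size >= 2048
       if isinstance(public_key, ec.EllipticCurvePublicKey):
           return public_key.curve.key_size >= 224
       return False   # other key types need their own policy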

Implementation experience has shown that certain parts of earlier TLS specifications are not easy to understand and have been a source of interoperability and security problems. Many of these areas have been clarified in this document but this appendix contains a short list of the most important things that require special attention from implementors.

TLS protocol issues:

  • Do you correctly handle handshake messages that are fragmented to multiple TLS records (see Section 5.1)? Including corner cases like a ClientHello that is split to several small fragments? Do you fragment handshake messages that exceed the maximum fragment size? In particular, the Certificate and CertificateRequest handshake messages can be large enough to require fragmentation.
  • Do you ignore the TLS record layer version number in all unencrypted TLS records? (see Appendix D)
  • Have you ensured that all support for SSL, RC4, EXPORT ciphers, and MD5 (via the “signature_algorithms” extension) is completely removed from all possible configurations that support TLS 1.3 or later, and that attempts to use these obsolete capabilities fail correctly? (see Appendix D)
  • Do you handle TLS extensions in ClientHello correctly, including unknown extensions?
  • When the server has requested a client certificate, but no suitable certificate is available, do you correctly send an empty Certificate message, instead of omitting the whole message (see Section 4.4.2.3)?
  • When processing the plaintext fragment produced by AEAD-Decrypt and scanning from the end for the ContentType, do you avoid scanning past the start of the cleartext in the event that the peer has sent a malformed plaintext of all-zeros? (A parsing sketch follows this list.)
  • Do you properly ignore unrecognized cipher suites (Section 4.1.2), hello extensions (Section 4.2), named groups (Section 4.2.7), key shares (Section 4.2.8), supported versions (Section 4.2.1), and signature algorithms (Section 4.2.3) in the ClientHello?
  • As a server, do you send a HelloRetryRequest to clients which support a compatible (EC)DHE group but do not predict it in the “key_share” extension? As a client, do you correctly handle a HelloRetryRequest from the server?
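
The following non-normative Python sketch shows one way to recover the content and ContentType from a decrypted TLSInnerPlaintext while rejecting a malformed all-zeros plaintext; the exception name is illustrative only.

   class UnexpectedMessageAlert(Exception):
       """Illustrative stand-in for aborting with "unexpected_message"."""

   def split_inner_plaintext(plaintext: bytes):
       # Scan backwards over the zero padding for the ContentType octet.
       i = len(plaintext) - 1
       while i >= 0 and plaintext[i] == 0:
           i -= 1
       if i < 0:
           # All zeros: there is no ContentType, so reject the record.
           raise UnexpectedMessageAlert()
       return plaintext[i], plaintext[:i]   # (content_type, content)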

Cryptographic details:

  • What countermeasures do you use to prevent timing attacks [TIMING]?
  • When using Diffie-Hellman key exchange, do you correctly preserve leading zero bytes in the negotiated key (see Section 7.4.1)?
  • Does your TLS client check that the Diffie-Hellman parameters sent by the server are acceptable, (see Section 4.2.8.1)?
  • Do you use a strong and, most importantly, properly seeded random number generator (see Appendix C.1) when generating Diffie-Hellman private values, the ECDSA “k” parameter, and other security-critical values? It is RECOMMENDED that implementations implement “deterministic ECDSA” as specified in [RFC6979].
  • Do you zero-pad Diffie-Hellman public key values to the group size (see Section 4.2.8.1)? (An encoding sketch follows this list.)
  • Do you verify signatures after making them to protect against RSA-CRT key leaks? [FW15]
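
A minimal, non-normative sketch of the fixed-length encoding behind the zero-padding and leading-zero questions above, assuming a finite field group with prime p; the same left-padding applies to the public value and to the shared secret.

   def encode_ffdhe_value(value: int, p: int) -> bytes:
       # Encode as a big-endian integer padded with leading zeros to the
       # size of the group prime, preserving any leading zero bytes.
       length = (p.bit_length() + 7) // 8
       return value.to_bytes(length, "big")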

Clients SHOULD NOT reuse a ticket for multiple connections. Reuse of a ticket allows passive observers to correlate different connections. Servers that issue tickets SHOULD offer at least as many tickets as the number of connections that a client might use; for example, a web browser using HTTP/1.1 [RFC7230] might open six connections to a server. Servers SHOULD issue new tickets with every connection. This ensures that clients are always able to use a new ticket when creating a new connection.

Previous versions of TLS offered explicitly unauthenticated cipher suites based on anonymous Diffie-Hellman. These modes have been deprecated in TLS 1.3. However, it is still possible to negotiate parameters that do not provide verifiable server authentication by several methods, including:

  • Raw public keys [RFC7250].
  • Using a public key contained in a certificate but without validation of the certificate chain or any of its contents.

Either technique used alone is vulnerable to man-in-the-middle attacks and therefore unsafe for general use. However, it is also possible to bind such connections to an external authentication mechanism via out-of-band validation of the server’s public key, trust on first use, or a mechanism such as channel bindings (though the channel bindings described in [RFC5929] are not defined for TLS 1.3). If no such mechanism is used, then the connection has no protection against active man-in-the-middle attack; applications MUST NOT use TLS in such a way absent explicit configuration or a specific application profile.

The TLS protocol provides a built-in mechanism for version negotiation between endpoints potentially supporting different versions of TLS.

TLS 1.x and SSL 3.0 use compatible ClientHello messages. Servers can also handle clients trying to use future versions of TLS as long as the ClientHello format remains compatible and there is at least one protocol version supported by both the client and the server.

Prior versions of TLS used the record layer version number (TLSPlaintext.legacy_record_version and TLSCiphertext.legacy_record_version) for various purposes. As of TLS 1.3, this field is deprecated. The value of TLSPlaintext.legacy_record_version MUST be ignored by all implementations. The value of TLSCiphertext.legacy_record_version is included in the additional data for deprotection but MAY otherwise be ignored or MAY be validated to match the fixed constant value. Version negotiation is performed using only the handshake versions (ClientHello.legacy_version, ServerHello.legacy_version, as well as the ClientHello, HelloRetryRequest and ServerHello “supported_versions” extensions). In order to maximize interoperability with older endpoints, implementations that negotiate the use of TLS 1.0-1.2 SHOULD set the record layer version number to the negotiated version for the ServerHello and all records thereafter.

For maximum compatibility with previously non-standard behavior and misconfigured deployments, all implementations SHOULD support validation of certification paths based on the expectations in this document, even when handling prior TLS versions’ handshakes (see Section 4.4.2.2).

TLS 1.2 and prior supported an “Extended Master Secret” [RFC7627] extension which digested large parts of the handshake transcript into the master secret. Because TLS 1.3 always hashes in the transcript up to the server CertificateVerify, implementations which support both TLS 1.3 and earlier versions SHOULD indicate the use of the Extended Master Secret extension in their APIs whenever TLS 1.3 is used.

A TLS 1.3 client who wishes to negotiate with servers that do not support TLS 1.3 will send a normal TLS 1.3 ClientHello containing 0x0303 (TLS 1.2) in ClientHello.legacy_version but with the correct version(s) in the “supported_versions” extension. If the server does not support TLS 1.3 it will respond with a ServerHello containing an older version number. If the client agrees to use this version, the negotiation will proceed as appropriate for the negotiated protocol. A client using a ticket for resumption SHOULD initiate the connection using the version that was previously negotiated.

Note that 0-RTT data is not compatible with older servers and SHOULD NOT be sent absent knowledge that the server supports TLS 1.3. See Appendix D.3.

If the version chosen by the server is not supported by the client (or not acceptable), the client MUST abort the handshake with a “protocol_version” alert.

Some legacy server implementations are known to not implement the TLS specification properly and might abort connections upon encountering TLS extensions or versions which they are not aware of. Interoperability with buggy servers is a complex topic beyond the scope of this document. Multiple connection attempts may be required in order to negotiate a backwards compatible connection; however, this practice is vulnerable to downgrade attacks and is NOT RECOMMENDED.

A TLS server can also receive a ClientHello indicating a version number smaller than its highest supported version. If the “supported_versions” extension is present, the server MUST negotiate using that extension as described in Section 4.2.1. If the “supported_versions” extension is not present, the server MUST negotiate the minimum of ClientHello.legacy_version and TLS 1.2. For example, if the server supports TLS 1.0, 1.1, and 1.2, and legacy_version is TLS 1.0, the server will proceed with a TLS 1.0 ServerHello. If the “supported_versions” extension is absent and the server only supports versions greater than ClientHello.legacy_version, the server MUST abort the handshake with a “protocol_version” alert.
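
As a non-normative illustration, the selection rule for the case without the “supported_versions” extension could be sketched as follows; the version constants use the wire encoding (0x0301 = TLS 1.0, 0x0303 = TLS 1.2) and the exception name is illustrative only.

   TLS12 = 0x0303

   class ProtocolVersionAlert(Exception):
       """Illustrative stand-in for aborting with "protocol_version"."""

   def negotiate_without_supported_versions(legacy_version, server_versions):
       # Negotiate at most the minimum of legacy_version and TLS 1.2.
       cap = min(legacy_version, TLS12)
       acceptable = [v for v in server_versions if v <= cap]
       if not acceptable:
           # The server only supports versions greater than legacy_version.
           raise ProtocolVersionAlert()
       return max(acceptable)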

Note that earlier versions of TLS did not clearly specify the record layer version number value in all cases (TLSPlaintext.legacy_record_version). Servers will receive various TLS 1.x versions in this field, but its value MUST always be ignored.

0-RTT data is not compatible with older servers. An older server will respond to the ClientHello with an older ServerHello, but it will not correctly skip the 0-RTT data and will fail to complete the handshake. This can cause issues when a client attempts to use 0-RTT, particularly against multi-server deployments. For example, a deployment could deploy TLS 1.3 gradually with some servers implementing TLS 1.3 and some implementing TLS 1.2, or a TLS 1.3 deployment could be downgraded to TLS 1.2.

A client that attempts to send 0-RTT data MUST fail a connection if it receives a ServerHello with TLS 1.2 or older. A client that attempts to repair this error SHOULD NOT send a TLS 1.2 ClientHello, but instead send a TLS 1.3 ClientHello without 0-RTT data.

To avoid this error condition, multi-server deployments SHOULD ensure a uniform and stable deployment of TLS 1.3 without 0-RTT prior to enabling 0-RTT.

Field measurements [Ben17a], [Ben17b], [Res17a], [Res17b] have found that a significant number of middleboxes misbehave when a TLS client/server pair negotiates TLS 1.3. Implementations can increase the chance of making connections through those middleboxes by making the TLS 1.3 handshake look more like a TLS 1.2 handshake:

  • The client always provides a non-empty session ID in the ClientHello, as described in the legacy_session_id section of Section 4.1.2.
  • If not offering early data, the client sends a dummy change_cipher_spec record (see the third paragraph of Section 5.1) immediately before its second flight. This may either be before its second ClientHello or before its encrypted handshake flight. If offering early data, the record is placed immediately after the first ClientHello.
  • The server sends a dummy change_cipher_spec record immediately after its first handshake message. This may either be after a ServerHello or a HelloRetryRequest.

When put together, these changes make the TLS 1.3 handshake resemble TLS 1.2 session resumption, which improves the chance of successfully connecting through middleboxes. This “compatibility mode” is partially negotiated: The client can opt to provide a session ID or not and the server has to echo it. Either side can send change_cipher_spec at any time during the handshake, as they must be ignored by the peer, but if the client sends a non-empty session ID, the server MUST send the change_cipher_spec as described in this section.
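
As a point of reference, the dummy change_cipher_spec record mentioned above is a plaintext record of type change_cipher_spec(20) carrying the single byte 0x01; a minimal, non-normative sketch of its encoding:

   import struct

   def dummy_change_cipher_spec() -> bytes:
       content_type = 20        # change_cipher_spec
       legacy_version = 0x0303  # TLS 1.2 record-layer version
       payload = b"\x01"
       return struct.pack("!BHH", content_type, legacy_version,
                          len(payload)) + payload

   assert dummy_change_cipher_spec() == bytes.fromhex("140303000101")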

Implementations negotiating use of older versions of TLS SHOULD prefer forward secret and AEAD cipher suites, when available.

The security of RC4 cipher suites is considered insufficient for the reasons cited in [RFC7465]. Implementations MUST NOT offer or negotiate RC4 cipher suites for any version of TLS for any reason.

Old versions of TLS permitted the use of very low strength ciphers. Ciphers with a strength less than 112 bits MUST NOT be offered or negotiated for any version of TLS for any reason.

The security of SSL 3.0 [SSL3] is considered insufficient for the reasons enumerated in [RFC7568], and it MUST NOT be negotiated for any reason.

The security of SSL 2.0 [SSL2] is considered insufficient for the reasons enumerated in [RFC6176], and it MUST NOT be negotiated for any reason.

Implementations MUST NOT send an SSL version 2.0 compatible CLIENT-HELLO. Implementations MUST NOT negotiate TLS 1.3 or later using an SSL version 2.0 compatible CLIENT-HELLO. Implementations are NOT RECOMMENDED to accept an SSL version 2.0 compatible CLIENT-HELLO in order to negotiate older versions of TLS.

Implementations MUST NOT send a ClientHello.legacy_version or ServerHello.legacy_version set to 0x0300 or less. Any endpoint receiving a Hello message with ClientHello.legacy_version or ServerHello.legacy_version set to 0x0300 MUST abort the handshake with a “protocol_version” alert.

Implementations MUST NOT send any records with a version less than 0x0300. Implementations SHOULD NOT accept any records with a version less than 0x0300 (but may inadvertently do so if the record version number is ignored completely).

Implementations MUST NOT use the Truncated HMAC extension, defined in Section 7 of [RFC6066], as it is not applicable to AEAD algorithms and has been shown to be insecure in some scenarios.

A complete security analysis of TLS is outside the scope of this document. In this section, we provide an informal description of the desired properties as well as references to more detailed work in the research literature which provides more formal definitions.

We cover properties of the handshake separately from those of the record layer.

The TLS handshake is an Authenticated Key Exchange (AKE) protocol which is intended to provide both one-way authenticated (server-only) and mutually authenticated (client and server) functionality. At the completion of the handshake, each side outputs its view of the following values:

  • A set of “session keys” (the various secrets derived from the master secret) from which can be derived a set of working keys.
  • A set of cryptographic parameters (algorithms, etc.)
  • The identities of the communicating parties.

We assume the attacker to be an active network attacker, which means it has complete control over the network used to communicate between the parties [RFC3552]. Even under these conditions, the handshake should provide the properties listed below. Note that these properties are not necessarily independent, but reflect the protocol consumers’ needs.

Establishing the same session keys.
The handshake needs to output the same set of session keys on both sides of the handshake, provided that it completes successfully on each endpoint (See [CK01]; defn 1, part 1).
Secrecy of the session keys.
The shared session keys should be known only to the communicating parties and not to the attacker (See [CK01]; defn 1, part 2). Note that in a unilaterally authenticated connection, the attacker can establish its own session keys with the server, but those session keys are distinct from those established by the client.
Peer Authentication.
The client’s view of the peer identity should reflect the server’s identity. If the client is authenticated, the server’s view of the peer identity should match the client’s identity.
Uniqueness of the session keys.
Any two distinct handshakes should produce distinct, unrelated session keys. Individual session keys produced by a handshake should also be distinct and independent.
Downgrade protection.
The cryptographic parameters should be the same on both sides and should be the same as if the peers had been communicating in the absence of an attack (See [BBFKZG16]; defns 8 and 9).
Forward secret with respect to long-term keys.
If the long-term keying material (in this case the signature keys in certificate-based authentication modes or the external/resumption PSK in PSK with (EC)DHE modes) is compromised after the handshake is complete, this does not compromise the security of the session key (See [DOW92]), as long as the session key itself has been erased. The forward secrecy property is not satisfied when PSK is used in the “psk_ke” PskKeyExchangeMode.
Key Compromise Impersonation (KCI) resistance.
In a mutually-authenticated connection with certificates, compromising the long-term secret of one actor should not break that actor’s authentication of their peer in the given connection (see [HGFS15]). For example, if a client’s signature key is compromised, it should not be possible to impersonate arbitrary servers to that client in subsequent handshakes.
Protection of endpoint identities.
The server’s identity (certificate) should be protected against passive attackers. The client’s identity should be protected against both passive and active attackers.

Informally, the signature-based modes of TLS 1.3 provide for the establishment of a unique, secret, shared key established by an (EC)DHE key exchange and authenticated by the server’s signature over the handshake transcript, as well as tied to the server’s identity by a MAC. If the client is authenticated by a certificate, it also signs over the handshake transcript and provides a MAC tied to both identities. [SIGMA] describes the design and analysis of this type of key exchange protocol. If fresh (EC)DHE keys are used for each connection, then the output keys are forward secret.

The external PSK and resumption PSK bootstrap from a long-term shared secret into a unique per-connection set of short-term session keys. This secret may have been established in a previous handshake. If PSK with (EC)DHE key establishment is used, these session keys will also be forward secret. The resumption PSK has been designed so that the resumption master secret computed by connection N and needed to form connection N+1 is separate from the traffic keys used by connection N, thus providing forward secrecy between the connections. In addition, if multiple tickets are established on the same connection, they are associated with different keys, so compromise of the PSK associated with one ticket does not lead to the compromise of connections established with PSKs associated with other tickets. This property is most interesting if tickets are stored in a database (and so can be deleted) rather than if they are self-encrypted.

The PSK binder value forms a binding between a PSK and the current handshake, as well as between the session where the PSK was established and the current session. This binding transitively includes the original handshake transcript, because that transcript is digested into the values which produce the Resumption Master Secret. This requires that both the KDF used to produce the resumption master secret and the MAC used to compute the binder be collision resistant. See Appendix E.1.1 for more on this. Note: The binder does not cover the binder values from other PSKs, though they are included in the Finished MAC.

Note: TLS does not currently permit the server to send a certificate_request message in non-certificate-based handshakes (e.g., PSK). If this restriction were to be relaxed in future, the client’s signature would not cover the server’s certificate directly. However, if the PSK was established through a NewSessionTicket, the client’s signature would transitively cover the server’s certificate through the PSK binder. [PSK-FINISHED] describes a concrete attack on constructions that do not bind to the server’s certificate (see also [Kraw16]). It is unsafe to use certificate-based client authentication when the client might potentially share the same PSK/key-id pair with two different endpoints. Implementations MUST NOT combine external PSKs with certificate-based authentication of either the client or the server unless negotiated by some extension.

If an exporter is used, then it produces values which are unique and secret (because they are generated from a unique session key). Exporters computed with different labels and contexts are computationally independent, so it is not feasible to compute one from another or the session secret from the exported value. Note: exporters can produce arbitrary-length values. If exporters are to be used as channel bindings, the exported value MUST be large enough to provide collision resistance. The exporters provided in TLS 1.3 are derived from the same handshake contexts as the early traffic keys and the application traffic keys respectively, and thus have similar security properties. Note that they do not include the client’s certificate; future applications which wish to bind to the client’s certificate may need to define a new exporter that includes the full handshake transcript.

For all handshake modes, the Finished MAC (and where present, the signature), prevents downgrade attacks. In addition, the use of certain bytes in the random nonces as described in Section 4.1.3 allows the detection of downgrade to previous TLS versions. See [BBFKZG16] for more detail on TLS 1.3 and downgrade.

As soon as the client and the server have exchanged enough information to establish shared keys, the remainder of the handshake is encrypted, thus providing protection against passive attackers, even if the computed shared key is not authenticated. Because the server authenticates before the client, the client can ensure that if it authenticates to the server, it only reveals its identity to an authenticated server. Note that implementations must use the provided record padding mechanism during the handshake to avoid leaking information about the identities due to length. The client’s proposed PSK identities are not encrypted, nor is the one that the server selects.

E.1.1. Key Derivation and HKDF

Key derivation in TLS 1.3 uses the HKDF function defined in [RFC5869] and its two components, HKDF-Extract and HKDF-Expand. The full rationale for the HKDF construction can be found in [Kraw10] and the rationale for the way it is used in TLS 1.3 in [KW16]. Throughout this document, each application of HKDF-Extract is followed by one or more invocations of HKDF-Expand. This ordering should always be followed (including in future revisions of this document), in particular, one SHOULD NOT use an output of HKDF-Extract as an input to another application of HKDF-Extract without an HKDF-Expand in between. Consecutive applications of HKDF-Expand are allowed as long as these are differentiated via the key and/or the labels.

Note that HKDF-Expand implements a pseudorandom function (PRF) with both inputs and outputs of variable length. In some of the uses of HKDF in this document (e.g., for generating exporters and the resumption_master_secret), it is necessary that the application of HKDF-Expand be collision-resistant, namely, it should be infeasible to find two different inputs to HKDF-Expand that output the same value. This requires the underlying hash function to be collision resistant and the output length from HKDF-Expand to be of size at least 256 bits (or as much as needed for the hash function to prevent finding collisions).
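
For reference, a minimal, non-normative Python sketch of HKDF-Extract and HKDF-Expand as defined in [RFC5869] (shown here with SHA-256) is given below.

   import hashlib
   import hmac

   def hkdf_extract(salt: bytes, ikm: bytes, hash_name: str = "sha256") -> bytes:
       if not salt:
           # Per RFC 5869, an empty salt defaults to HashLen zero bytes.
           salt = bytes(hashlib.new(hash_name).digest_size)
       return hmac.new(salt, ikm, hash_name).digest()

   def hkdf_expand(prk: bytes, info: bytes, length: int,
                   hash_name: str = "sha256") -> bytes:
       hash_len = hashlib.new(hash_name).digest_size
       okm, block = b"", b""
       for counter in range(1, -(-length // hash_len) + 1):
           block = hmac.new(prk, block + info + bytes([counter]),
                            hash_name).digest()
           okm += block
       return okm[:length]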

E.1.2. Client Authentication

A client that has sent authentication data to a server, either during the handshake or in post-handshake authentication, cannot be sure if the server afterwards considers the client to be authenticated or not. If the client needs to determine if the server considers the connection to be unilaterally or mutually authenticated, this has to be provisioned by the application layer. See [CHHSV17] for details. In addition, the analysis of post-handshake authentication from [Kraw16] shows that the client identified by the certificate sent in the post-handshake phase possesses the traffic key. This party is therefore the client that participated in the original handshake or one to whom the original client delegated the traffic key (assuming that the traffic key has not been compromised).

E.1.3. 0-RTT

The 0-RTT mode of operation generally provides similar security properties as 1-RTT data, with the two exceptions that the 0-RTT encryption keys do not provide full forward secrecy and that the server is not able to guarantee uniqueness of the handshake (non-replayability) without keeping potentially undue amounts of state. See Section 8 for mechanisms to limit the exposure to replay.

E.1.4. Exporter Independence

The exporter_master_secret and early_exporter_master_secret are derived to be independent of the traffic keys and therefore do not represent a threat to the security of traffic encrypted with those keys. However, because these secrets can be used to compute any exporter value, they SHOULD be erased as soon as possible. If the total set of exporter labels is known, then implementations SHOULD pre-compute the inner Derive-Secret stage of the exporter computation for all those labels, then erase the [early_]exporter_master_secret, followed by each inner value as soon as it is known that it will not be needed again.

E.1.5. Post-Compromise Security

TLS does not provide security for handshakes which take place after the peer’s long-term secret (signature key or external PSK) is compromised. It therefore does not provide post-compromise security [CCG16], sometimes also referred to as backwards or future secrecy. This is in contrast to KCI resistance, which describes the security guarantees that a party has after its own long-term secret has been compromised.

E.1.6. External References

The reader should refer to the following references for analysis of the TLS handshake: [DFGS15] [CHSV16] [DFGS16] [KW16] [Kraw16] [FGSW16] [LXZFH16] [FG17] [BBK17].

The record layer depends on the handshake producing strong traffic secrets which can be used to derive bidirectional encryption keys and nonces. Assuming that is true, and the keys are used for no more data than indicated in Section 5.5, then the record layer should provide the following guarantees:

Confidentiality.
An attacker should not be able to determine the plaintext contents of a given record.
Integrity.
An attacker should not be able to craft a new record which is different from an existing record which will be accepted by the receiver.
Order protection/non-replayability.
An attacker should not be able to cause the receiver to accept a record which it has already accepted or cause the receiver to accept record N+1 without having first processed record N.
Length concealment.
Given a record with a given external length, the attacker should not be able to determine the amount of the record that is content versus padding.
Forward secrecy after key change.
If the traffic key update mechanism described in Section 4.6.3 has been used and the previous generation key is deleted, an attacker who compromises the endpoint should not be able to decrypt traffic encrypted with the old key.

Informally, TLS 1.3 provides these properties by AEAD-protecting the plaintext with a strong key. AEAD encryption [RFC5116] provides confidentiality and integrity for the data. Non-replayability is provided by using a separate nonce for each record, with the nonce being derived from the record sequence number (Section 5.3), with the sequence number being maintained independently at both sides; thus, records which are delivered out of order result in AEAD deprotection failures. In order to prevent mass cryptanalysis when the same plaintext is repeatedly encrypted by different users under the same key (as is commonly the case for HTTP), the nonce is formed by mixing the sequence number with a secret per-connection initialization vector derived along with the traffic keys. See [BT16] for analysis of this construction.
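
A minimal, non-normative sketch of the per-record nonce construction (Section 5.3): the 64-bit sequence number is left-padded to the length of the AEAD IV and XORed with the per-connection write IV.

   def per_record_nonce(sequence_number: int, write_iv: bytes) -> bytes:
       # Left-pad the sequence number to the IV length, then XOR.
       padded = sequence_number.to_bytes(len(write_iv), "big")
       return bytes(iv ^ seq for iv, seq in zip(write_iv, padded))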

The re-keying technique in TLS 1.3 (see Section 7.2) follows the construction of the serial generator in [REKEY], which shows that re-keying can allow keys to be used for a larger number of encryptions than without re-keying. This relies on the security of the HKDF-Expand-Label function as a pseudorandom function (PRF). In addition, as long as this function is truly one way, it is not possible to compute traffic keys from prior to a key change (forward secrecy).

TLS does not provide security for data which is communicated on a connection after a traffic secret of that connection is compromised. That is, TLS does not provide post-compromise security/future secrecy/backward secrecy with respect to the traffic secret. Indeed, an attacker who learns a traffic secret can compute all future traffic secrets on that connection. Systems which want such guarantees need to do a fresh handshake and establish a new connection with an (EC)DHE exchange.

E.2.1. External References

The reader should refer to the following references for analysis of the TLS record layer: [BMMT15] [BT16] [BDFKPPRSZZ16] [BBK17] [Anon18].

TLS is susceptible to a variety of traffic analysis attacks based on observing the length and timing of encrypted packets [CLINIC][HCJ16]. This is particularly easy when there is a small set of possible messages to be distinguished, such as for a video server hosting a fixed corpus of content, but still provides usable information even in more complicated scenarios.

TLS does not provide any specific defenses against this form of attack but does include a padding mechanism for use by applications: The plaintext protected by the AEAD function consists of content plus variable-length padding, which allows the application to produce arbitrary length encrypted records as well as padding-only cover traffic to conceal the difference between periods of transmission and periods of silence. Because the padding is encrypted alongside the actual content, an attacker cannot directly determine the length of the padding, but may be able to measure it indirectly by the use of timing channels exposed during record processing (i.e., seeing how long it takes to process a record or trickling in records to see which ones elicit a response from the server). In general, it is not known how to remove all of these channels because even a constant time padding removal function will likely feed the content into data-dependent functions. At minimum, a fully constant time server or client would require close cooperation with the application layer protocol implementation, including making that higher level protocol constant time.
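
A minimal sketch of the sender side of this mechanism, following the TLSInnerPlaintext structure given earlier: the content is followed by the real ContentType octet and then an application-chosen run of zero padding. A padding-only record (empty content with type application_data) yields the cover traffic mentioned above.

   def build_inner_plaintext(content: bytes, content_type: int,
                             padding_length: int) -> bytes:
       # content || ContentType || zeros[padding_length]
       return content + bytes([content_type]) + bytes(padding_length)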

Note: Robust traffic analysis defenses will likely lead to inferior performance due to delay in transmitting packets and increased traffic volume.

In general, TLS does not have specific defenses against side-channel attacks (i.e., those which attack the communications via secondary channels such as timing) leaving those to the implementation of the relevant cryptographic primitives. However, certain features of TLS are designed to make it easier to write side-channel resistant code:

  • Unlike previous versions of TLS which used a composite MAC-then-encrypt structure, TLS 1.3 only uses AEAD algorithms, allowing implementations to use self-contained constant-time implementations of those primitives.
  • TLS uses a uniform “bad_record_mac” alert for all decryption errors, which is intended to prevent an attacker from gaining piecewise insight into portions of the message. Additional resistance is provided by terminating the connection on such errors; a new connection will have different cryptographic material, preventing attacks against the cryptographic primitives that require multiple trials.

Information leakage through side channels can occur at layers above TLS, in application protocols and the applications that use them. Resistance to side-channel attacks depends on applications and application protocols separately ensuring that confidential information is not inadvertently leaked.

Replayable 0-RTT data presents a number of security threats to TLS-using applications, unless those applications are specifically engineered to be safe under replay (minimally, this means idempotent, but in many cases may also require other stronger conditions, such as constant-time response). Potential attacks include:

  • Duplication of actions which cause side effects (e.g., purchasing an item or transferring money), thus harming the site or the user.
  • Attackers can store and replay 0-RTT messages in order to re-order them with respect to other messages (e.g., moving a delete to after a create).
  • Exploiting cache timing behavior to discover the content of 0-RTT messages by replaying a 0-RTT message to a different cache node and then using a separate connection to measure request latency, to see if the two requests address the same resource.

If data can be replayed a large number of times, additional attacks become possible, such as making repeated measurements of the speed of cryptographic operations. In addition, attackers may be able to overload rate-limiting systems. For further description of these attacks, see [Mac17].

Ultimately, servers have the responsibility to protect themselves against attacks employing 0-RTT data replication. The mechanisms described in Section 8 are intended to prevent replay at the TLS layer but do not provide complete protection against receiving multiple copies of client data. TLS 1.3 falls back to the 1-RTT handshake when the server does not have any information about the client, e.g., because it is in a different cluster which does not share state or because the ticket has been deleted as described in Section 8.1. If the application layer protocol retransmits data in this setting, then it is possible for an attacker to induce message duplication by sending the ClientHello to both the original cluster (which processes the data immediately) and another cluster which will fall back to 1-RTT and process the data upon application layer replay. The scale of this attack is limited by the client’s willingness to retry transactions and therefore only allows a limited amount of duplication, with each copy appearing as a new connection at the server.

If implemented correctly, the mechanisms described in Section 8.1 and Section 8.2 prevent a replayed ClientHello and its associated 0-RTT data from being accepted multiple times by any cluster with consistent state; for servers which limit the use of 0-RTT to one cluster for a single ticket, then a given ClientHello and its associated 0-RTT data will only be accepted once. However, if state is not completely consistent, then an attacker might be able to have multiple copies of the data be accepted during the replication window. Because clients do not know the exact details of server behavior, they MUST NOT send messages in early data which are not safe to have replayed and which they would not be willing to retry across multiple 1-RTT connections.

Application protocols MUST NOT use 0-RTT data without a profile that defines its use. That profile needs to identify which messages or interactions are safe to use with 0-RTT and how to handle the situation when the server rejects 0-RTT and falls back to 1-RTT.

In addition, to avoid accidental misuse, TLS implementations MUST NOT enable 0-RTT (either sending or accepting) unless specifically requested by the application and MUST NOT automatically resend 0-RTT data if it is rejected by the server unless instructed by the application. Server-side applications may wish to implement special processing for 0-RTT data for some kinds of application traffic (e.g., abort the connection, request that data be resent at the application layer, or delay processing until the handshake completes). In order to allow applications to implement this kind of processing, TLS implementations MUST provide a way for the application to determine if the handshake has completed.
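
As a non-normative illustration (not part of the specification), the sketch below shows how an application using OpenSSL 1.1.1 or later might keep 0-RTT under explicit application control: early data stays disabled unless the application opts in, replayable data is only buffered until the handshake outcome is known, and buffered data is processed only if it was actually accepted. The buffer_until_handshake_done() helper is a placeholder, and error handling and the surrounding server setup are omitted.

#include <openssl/ssl.h>

static void buffer_until_handshake_done(const unsigned char *data, size_t len)
{
    /* Placeholder: a real application would queue the bytes here and decide
     * later whether the request is safe to act on. */
    (void)data;
    (void)len;
}

static void accept_with_early_data(SSL *ssl)
{
    unsigned char buf[4096];
    size_t readbytes = 0;
    int rc;

    /* 0-RTT stays disabled unless the application opts in explicitly. */
    SSL_set_max_early_data(ssl, 16384);

    /* Drain any replayable early data, but do not act on it yet. */
    while ((rc = SSL_read_early_data(ssl, buf, sizeof(buf), &readbytes))
               == SSL_READ_EARLY_DATA_SUCCESS) {
        buffer_until_handshake_done(buf, readbytes);
    }

    if (rc == SSL_READ_EARLY_DATA_ERROR)
        return; /* handshake failed; nothing buffered is processed */

    /* Only act on the buffered data if the early data was actually accepted;
     * otherwise wait for (or request) retransmission over the 1-RTT connection. */
    if (SSL_get_early_data_status(ssl) == SSL_EARLY_DATA_ACCEPTED) {
        /* hand the buffered requests to the application */
    }
}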

E.5.1. Replay and Exporters

Replays of the ClientHello produce the same early exporter, thus requiring additional care by applications which use these exporters. In particular, if these exporters are used as an authentication channel binding (e.g., by signing the output of the exporter) an attacker who compromises the PSK can transplant authenticators between connections without compromising the authentication key.

In addition, the early exporter SHOULD NOT be used to generate server-to-client encryption keys because that would entail the reuse of those keys. This parallels the use of the early application traffic keys only in the client-to-server direction.

Because implementations respond to an invalid PSK binder by aborting the handshake, it may be possible for an attacker to verify whether a given PSK identity is valid. Specifically, if a server accepts both external PSK and certificate-based handshakes, a valid PSK identity will result in a failed handshake, whereas an invalid identity will just be skipped and result in a successful certificate handshake. Servers which solely support PSK handshakes may be able to resist this form of attack by treating the cases where there is no valid PSK identity and where there is an identity but it has an invalid binder identically.

Although TLS 1.3 does not use RSA key transport and so is not directly susceptible to Bleichenbacher-type attacks, if TLS 1.3 servers also support static RSA in the context of previous versions of TLS, then it may be possible to impersonate the server for TLS 1.3 connections [JSS15]. TLS 1.3 implementations can prevent this attack by disabling support for static RSA across all versions of TLS. In principle, implementations might also be able to separate certificates with different keyUsage bits for static RSA decryption and RSA signature, but this technique relies on clients refusing to accept signatures using keys in certificates that do not have the digitalSignature bit set, and many clients do not enforce this restriction.

The discussion list for the IETF TLS working group is located at the e-mail address tls@ietf.org. Information on the group and information on how to subscribe to the list is at https://www.ietf.org/mailman/listinfo/tls

Archives of the list can be found at: https://www.ietf.org/mail-archive/web/tls/current/index.html

  • Martin Abadi
    University of California, Santa Cruz
    abadi@cs.ucsc.edu
  • Christopher Allen (co-editor of TLS 1.0)
    Alacrity Ventures
    ChristopherA@AlacrityManagement.com
  • Richard Barnes
    Cisco
    rlb@ipv.sx
  • Steven M. Bellovin
    Columbia University
    smb@cs.columbia.edu
  • David Benjamin
    Google
    davidben@google.com
  • Benjamin Beurdouche
    INRIA & Microsoft Research
    benjamin.beurdouche@ens.fr
  • Karthikeyan Bhargavan (co-author of [RFC7627])
    INRIA
    karthikeyan.bhargavan@inria.fr
  • Simon Blake-Wilson (co-author of [RFC4492])
    BCI
    sblakewilson@bcisse.com
  • Nelson Bolyard (co-author of [RFC4492])
    Sun Microsystems, Inc.
    nelson@bolyard.com
  • Ran Canetti
    IBM
    canetti@watson.ibm.com
  • Matt Caswell
    OpenSSL
    matt@openssl.org
  • Stephen Checkoway
    University of Illinois at Chicago
    sfc@uic.edu
  • Pete Chown
    Skygate Technology Ltd
    pc@skygate.co.uk
  • Katriel Cohn-Gordon
    University of Oxford
    me@katriel.co.uk
  • Cas Cremers
    University of Oxford
    cas.cremers@cs.ox.ac.uk
  • Antoine Delignat-Lavaud (co-author of [RFC7627])
    INRIA
    antdl@microsoft.com
  • Tim Dierks (co-editor of TLS 1.0, 1.1, and 1.2)
    Independent
    tim@dierks.org
  • Roelof DuToit
    Symantec Corporation
    roelof_dutoit@symantec.com
  • Taher Elgamal
    Securify
    taher@securify.com
  • Pasi Eronen
    Nokia
    pasi.eronen@nokia.com
  • Cedric Fournet
    Microsoft
    fournet@microsoft.com
  • Anil Gangolli
    anil@busybuddha.org
  • David M. Garrett
    dave@nulldereference.com
  • Illya Gerasymchuk
    Independent
    illya@iluxonchik.me
  • Alessandro Ghedini
    Cloudflare Inc.
    alessandro@cloudflare.com
  • Daniel Kahn Gillmor
    ACLU
    dkg@fifthhorseman.net
  • Matthew Green
    Johns Hopkins University
    mgreen@cs.jhu.edu
  • Jens Guballa
    ETAS
    jens.guballa@etas.com
  • Felix Guenther
    TU Darmstadt
    mail@felixguenther.info
  • Vipul Gupta (co-author of [RFC4492])
    Sun Microsystems Laboratories
    vipul.gupta@sun.com
  • Chris Hawk (co-author of [RFC4492])
    Corriente Networks LLC
    chris@corriente.net
  • Kipp Hickman
  • Alfred Hoenes
  • David Hopwood
    Independent Consultant
    david.hopwood@blueyonder.co.uk
  • Marko Horvat
    MPI-SWS
    mhorvat@mpi-sws.org
  • Jonathan Hoyland
    Royal Holloway, University of London
    jonathan.hoyland@gmail.com
  • Subodh Iyengar
    Facebook
    subodh@fb.com
  • Benjamin Kaduk
    Akamai
    kaduk@mit.edu
  • Hubert Kario
    Red Hat Inc.
    hkario@redhat.com
  • Phil Karlton (co-author of SSL 3.0)
  • Leon Klingele
    Independent
    mail@leonklingele.de
  • Paul Kocher (co-author of SSL 3.0)
    Cryptography Research
    paul@cryptography.com
  • Hugo Krawczyk
    IBM
    hugokraw@us.ibm.com
  • Adam Langley (co-author of [RFC7627])
    Google
    agl@google.com
  • Olivier Levillain
    ANSSI
    olivier.levillain@ssi.gouv.fr
  • Xiaoyin Liu
    University of North Carolina at Chapel Hill
    xiaoyin.l@outlook.com
  • Ilari Liusvaara
    Independent
    ilariliusvaara@welho.com
  • Atul Luykx
    K.U. Leuven
    atul.luykx@kuleuven.be
  • Colm MacCarthaigh
    Amazon Web Services
    colm@allcosts.net
  • Carl Mehner
    USAA
    carl.mehner@usaa.com
  • Jan Mikkelsen
    Transactionware
    janm@transactionware.com
  • Bodo Moeller (co-author of [RFC4492])
    Google
    bodo@acm.org
  • Kyle Nekritz
    Facebook
    knekritz@fb.com
  • Erik Nygren
    Akamai Technologies
    erik+ietf@nygren.org
  • Magnus Nystrom
    Microsoft
    mnystrom@microsoft.com
  • Kazuho Oku
    DeNA Co., Ltd.
    kazuhooku@gmail.com
  • Kenny Paterson
    Royal Holloway, University of London
    kenny.paterson@rhul.ac.uk
  • Alfredo Pironti (co-author of [RFC7627])
    INRIA
    alfredo.pironti@inria.fr
  • Andrei Popov
    Microsoft
    andrei.popov@microsoft.com
  • Marsh Ray (co-author of [RFC7627])
    Microsoft
    maray@microsoft.com
  • Robert Relyea
    Netscape Communications
    relyea@netscape.com
  • Kyle Rose
    Akamai Technologies
    krose@krose.org
  • Jim Roskind
    Amazon
    jroskind@amazon.com
  • Michael Sabin
  • Joe Salowey
    Tableau Software
    joe@salowey.net
  • Rich Salz
    Akamai
    rsalz@akamai.com
  • David Schinazi
    Apple Inc.
    dschinazi@apple.com
  • Sam Scott
    Royal Holloway, University of London
    me@samjs.co.uk
  • Dan Simon
    Microsoft, Inc.
    dansimon@microsoft.com
  • Brian Smith
    Independent
    brian@briansmith.org
  • Brian Sniffen
    Akamai Technologies
    ietf@bts.evenmere.org
  • Nick Sullivan
    Cloudflare Inc.
    nick@cloudflare.com
  • Bjoern Tackmann
    University of California, San Diego
    btackmann@eng.ucsd.edu
  • Tim Taubert
    Mozilla
    ttaubert@mozilla.com
  • Martin Thomson
    Mozilla
    mt@mozilla.com
  • Sean Turner
    sn3rd
    sean@sn3rd.com
  • Steven Valdez
    Google
    svaldez@google.com
  • Filippo Valsorda
    Cloudflare Inc.
    filippo@cloudflare.com
  • Thyla van der Merwe
    Royal Holloway, University of London
    tjvdmerwe@gmail.com
  • Victor Vasiliev
    Google
    vasilvv@google.com
  • Tom Weinstein
  • Hoeteck Wee
    Ecole Normale Superieure, Paris
    hoeteck@alum.mit.edu
  • David Wong
    NCC Group
    david.wong@nccgroup.trust
  • Christopher A. Wood
    Apple Inc.
    cawood@apple.com
  • Tim Wright
    Vodafone
    timothy.wright@vodafone.com
  • Peter Wu
    Independent
    peter@lekensteyn.nl
  • Kazu Yamamoto
    Internet Initiative Japan Inc.
    kazu@iij.ad.jp

Guidelines for writing readable code

Reading someone else's code can be quite confusing. Hours can be spent on issues that should have been fixed in minutes. In this article, I would like to share some advice on how to write code that will be easier to understand and maintain.

Before we get started, please note that this is not a guide to writing "clean code". People tend to understand different things by that term: some want code to be easily extendable and generic, some prefer to abstract the implementation away and provide just configuration, and some simply like to see subjectively beautiful code. This guide focuses on readable code, by which I mean a piece of code that communicates the necessary information to other programmers as efficiently as possible.

Below are 23 guidelines to help you write more readable code. This is a lengthy article, so feel free to jump to the parts that interest you:

1. Identify that you have a problem before creating the solution
2. Pick the right tool for the job.
3. Simplicity is king.
4. Your functions, classes, and components should have a well-defined purpose.
5. Naming is hard, but it's important.
6. Do not duplicate code.
7. Remove dead code, do not leave it commented.
8. Constant values should be in static constants or enums.
9. Prefer internal functions over custom solutions.
10. Use language specific guidelines.
11. Avoid creating multiple blocks of code nested in one another.
12. It’s not about the least number of lines.
13. Learn design patterns and when not to use them.
14. Split your classes to data holders and data manipulators.
15. Fix issues at their roots.
16. Hidden trap of abstractions.
17. Rules of the world are not the rules of your application.
18. Type your variables if you can, even if you don’t have to.
19. Write tests.
20. Use static code analysis tools.
21. Human code reviews.
22. Comments.
23. Documentation.
Conclusion.

1. Identify that you have a problem before creating the solution.


No matter whether you are fixing a bug, adding a new feature or designing an application, you are essentially solving a problem for someone. Ideally, you want to do that while leaving as few issues behind as possible. You should be clear on what problems you are solving with your design pattern choices, refactorings, external dependencies, databases and everything else you spend your valuable time on.

Your piece of code is a potential problem. Even the beautiful one. The only time any piece of code is no longer a problem is when a project is finished and dead - no longer supported. Why? Because someone will have to read it during the project lifetime, understand it, fix it, extend it, or even remove the feature it provides entirely.

Maintaining the codebase takes a lot of time, and not many programmers like to do it because it lacks creativity. Write your code simply enough that a junior developer can fix it when needed, and you are free to tackle bigger issues.

Lost time is a problem. A perfect solution to your task may be available, but it can sometimes be hard for a developer to see. There are tasks where the best solution is to convince the client that what he wants is not really what he needs. That takes a deeper understanding of the application and its purpose. Your client might want a whole new module that will end up becoming thousands of extra lines of code when he just needs some more customization of his existing options. It might turn out you only need to change the existing codebase a little bit, saving time and money.

There are other types of problems. Let's say you need to implement a filterable list of records. You hold your data in the database, but the connections between different records are complex. After analyzing how the client wants the data to be filtered, you discover that because of the database design you will have to spend about 20 hours building complex SQL queries with multiple joins and inner queries. Why not explain there is a different solution that will take 1 hour but will miss a part of the feature? It might turn out that the extra feature is not worth so much of your time, which translates to monetary cost.

2. Pick the right tool for the job.


This shiny silver-bullet language, that one framework you love unconditionally or a new database engine can turn out to be the wrong tool for the problem you are facing. Don't pick tools for a serious project just because you heard they are awesome for everything. It's a recipe for disaster. If your data needs relations, picking MongoDB just to learn it will end badly. You know you can do it, but often you will need workarounds that produce extra code and suboptimal solutions. And sure, you can hit a nail even with a wooden board, but a quick Google search might point you to a hammer. Maybe since you last checked there is an AI that can do it for you automatically.

3. Simplicity is king.


You might have heard the phrase "premature optimization is the root of all evil". It holds partial truth. You should prefer simple solutions unless you are confident they will not work. Not just believing they will not work, but having already tried them, or calculated it beforehand, and being sure. Choosing a more complex solution for whatever reason, be it speed of execution, lack of RAM, extensibility, lack of dependencies or anything else, can heavily impact code readability. Don't complicate things unless you have to. The exception is when you know a more efficient solution and you know its implementation won't hurt readability or your time requirements.

On the same note, you don't need to use all the new features of your language if they don't benefit you and your team. New doesn't mean better. If you are unsure, go back to the first point and consider what problem you are trying to solve before refactoring. Just because Javascript has way too many new ways of writing a for loop doesn't make plain for loops obsolete if you need that index variable.

4. Your functions, classes, and components should have a well-defined purpose.


Do you know the SOLID principles? I found them to be pretty good for designing generic libraries, but even though I have used them a couple of times and seen a few implementations in working projects, I think the rules are a bit too confusing and complicated.

Split your code into functions that each do one thing. For example, let's consider how we would go about implementing a button. Button could be a class that groups all the functionality of a button. You might implement it with one function for drawing the button on the screen, another function to highlight it on mouseover, yet another one to be called on clicking the button and one more to animate the button on click. You can split it even further: if you need to calculate the rectangle position of a button based on screen resolution, don't do it in the draw function. Implement it in a different class, since it is usable by other GUI elements, and just use it when drawing the button (a rough sketch of this split follows after the comparison below).

It is a simple thing to follow: whenever you think "this doesn't have to be here", you can move it to another function, providing more information to fellow developers by wrapping a block of code with the function's name and comments.

Consider the examples below, which do the same thing. Which one tells you what it does more quickly?

// C++
if (currentDistance < radius2) { // This is sight of a player
    if (!isLight) {
        // If lighting of the tile is about 30% (so sight in darkness is worse) or distance from player is 1, tile should be visible.
        if (hasInfravision || map.getLight(mapPosition) > 0.29f || ASEngine::vmath::distance(center, mapPosition) == 1) {
            map.toggleVisible(true, mapPosition);
        }
    }
    // This is for light calculations
    else {
        ASEngine::ivec3 region = World::inst().map.currentPosition;
        ASEngine::ivec2 pos = mapPosition;
        if (mapPosition.x > 63) {
            pos.x -= 64;
            region.x += 1;
        }
        else if (mapPosition.x < 0) {
            pos.x += 64;
            region.x -= 1;
        }
        if (mapPosition.y > 63) {
            pos.y -= 64;
            region.y += 1;
        }
        else if (mapPosition.y < 0) {
            pos.y += 64;
            region.y -= 1;
        }
        map.changeLight(pos, region, 1.0f - static_cast<float>(currentDistance) / static_cast<float>(radius2));
    }
}
// C++
if (currentDistance < radius2) { // This is sight of a player
    if (!isLight) {
        this->markVisibleTile(hasInfravision, map, center, mapPosition);
    }
    // This is for light calculations
    else {
        ASEngine::ivec3 region = World::inst().map.currentPosition;
        ASEngine::ivec2 pos = map.getRelativePosition(mapPosition, region);
        map.changeLight(pos, region, 1.0f - static_cast<float>(currentDistance) / static_cast<float>(radius2));
    }
}
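
Returning to the button example from the start of this section, here is a rough sketch of how the responsibilities could be split, with the rectangle math kept outside the button so other GUI elements can reuse it. All names and types are invented for illustration and not taken from any particular GUI library.

// C++
#include <functional>
#include <utility>

struct Rect { int x, y, w, h; };

// Rectangle math lives outside the Button so other GUI elements can reuse it.
class ScreenLayout {
public:
    ScreenLayout(int screenWidth, int screenHeight) : width(screenWidth), height(screenHeight) {}
    // Converts fractional coordinates (0..1) into a pixel rectangle for the current resolution.
    Rect toPixels(float fx, float fy, float fw, float fh) const {
        return { int(fx * width), int(fy * height), int(fw * width), int(fh * height) };
    }
private:
    int width, height;
};

// Each member function has one well-defined job.
class Button {
public:
    void draw(const ScreenLayout& layout) const { (void)layout; /* render using layout.toPixels(...) */ }
    void setHighlighted(bool on) { highlighted = on; }                // mouse-over state only
    void setOnClick(std::function<void()> handler) { onClick = std::move(handler); }
    void click() {                                                    // click action + animation only
        if (onClick) onClick();
        animateClick();
    }
private:
    void animateClick() { /* start the click animation */ }
    bool highlighted = false;
    std::function<void()> onClick;
};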

5. Naming is hard, but it's important.


Names of variables and functions should be distinct and provide a general idea of what they do. The important thing about naming is that it should describe what the code does to your team, so it should conform to the conventions chosen in the project, even if you don't agree with them. If every request for a record in the database starts with the word "find", like "findUser", then your team might get confused if you come to the project and name your database function "getUserProfile" because this is what you are used to. Try to group names when possible; for example, if you have many classes for input validation, putting "Validator" as the suffix of the name quickly tells readers what the purpose of the class is.

Choose and stick to a case type according to the standards. It gets really confusing to read camelCase, snake_case, kebab-case and beer🍻case used in different files of the same project.

6. Do not duplicate code.


We already established that code is a problem, so why duplicate your problems to save a few minutes? It really doesn't make sense. You might think you are solving something quickly by just copying and pasting, but if you have to copy more than 2 lines of code, entertain the idea that you might be missing an opportunity for a better solution. Maybe a generic function, or a loop?
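
As a small illustration of the point above (the field names and the validation rule are made up), the same check repeated for three fields can collapse into one helper and a loop:

// C++
#include <string>
#include <utility>
#include <vector>

bool isBlank(const std::string& value) {
    return value.find_first_not_of(" \t\r\n") == std::string::npos;
}

// Before: three nearly identical if-blocks checking name, email and phone.
// After: one loop over the required fields.
std::vector<std::string> missingFields(const std::string& name,
                                       const std::string& email,
                                       const std::string& phone) {
    std::vector<std::string> missing;
    const std::pair<const char*, const std::string*> required[] = {
        {"name", &name}, {"email", &email}, {"phone", &phone}
    };
    for (const auto& [label, value] : required) {
        if (isBlank(*value)) missing.push_back(label);
    }
    return missing;
}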

7. Remove dead code, do not leave it commented.


Commented-out code is confusing. Did someone remove it temporarily? Is it important? When was it commented out? It's dead; put it out of its misery. Just remove it. I get that you are hesitant to remove the code because things can go bad and you want to be able to just uncomment it. You might even be very attached to it, since you spent your time and energy to come up with it. Or maybe you think it might be needed "soon". The solution to all of these issues is version control. Just use the git history to retrieve the code if you ever need it. Clean up after yourself!

8. Constant values should be in static constants or enums.


Do you use strings or integers to define types of objects? For example, a user can have the role "admin" or "guest". How would you go about checking whether a user has the role "admin"?
if ($user->role == "admin") {
    // user is admin
}
This is not great. First of all, if the name "admin" changes, you will have to change it across your whole application. You might say this rarely happens and modern IDEs make it not that hard to replace. That's true. The other reason is the lack of autocomplete, and the misspelling issues that come with it. These can be pretty nasty to debug.

By defining global constants or enums, depending on the language, you can profit from autocomplete and change the value in a single place if you ever need to. You don't even have to remember what kind of value is hidden behind the constant; you just let your IDE's autocomplete magic help.

// PHP
const ROLE_ADMIN = "admin";

if ($user->role == ROLE_ADMIN) {
    // user is admin
}

// C++
enum class Role { GUEST, ADMIN }; // It's possible to map these enums to strings, but it's not needed.

if (user.role == Role::ADMIN) {
    // user is admin
}

It's not just the types of your objects. In PHP you can define arrays with strings as the names of the fields. With complex structures it can be hard not to make a typo, and for that reason it is preferable to use objects instead. Try to avoid coding with strings and you will profit from fewer typos and the speed increase of autocomplete.

9. Prefer internal functions over custom solutions.


If the language or framework you picked for your project provides a solution to your problem, use it. Everyone can quickly google what a built-in function does, even if it's not used often. It will probably take more time to figure out your custom solution. If you find a piece of code that does the same thing as an internal function, refactor it right away; don't leave it be. Removed code is no longer an issue, so deleting code is great!
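
For instance (a hedged example with an arbitrary container and value), a hand-rolled search loop can usually be replaced by the language's own std::find:

// C++
#include <algorithm>
#include <vector>

bool containsUserId(const std::vector<int>& ids, int wanted) {
    // Instead of: for (size_t i = 0; i < ids.size(); ++i) { if (ids[i] == wanted) return true; } return false;
    return std::find(ids.begin(), ids.end(), wanted) != ids.end();
}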

10. Use language specific guidelines.


If you write in PHP, you should get to know the PSRs. For Javascript, there is a decent guideline from Airbnb. For C++ there is a guideline from Google, as well as the Core Guidelines from Bjarne Stroustrup, the creator of C++. Other languages may have their own guidelines for quality code, or you can come up with your own standards for your team. The important part is to enforce the chosen guideline across the project, so there is a unified vision of how it should be developed. It prevents many issues that come from different people with their own unique experiences doing what they are used to.

11. Avoid creating multiple blocks of code nested in one another.


Just compare these two blocks of code:
void ProgressEffects::progressPoison(Entity entity, std::shared_ptr<Effects> effects)
{
    float currentTime = DayNightCycle::inst().getCurrentTime();
    if (effects->lastPoisonTick > 0.0f && currentTime > effects->lastPoisonTick + 1.0f) {
        if (effects->poison.second > currentTime) {
            std::shared_ptr<Equipment> eq = nullptr;
            int poisonResistance = 0;
            if (this->manager.entityHasComponent(entity, ComponentType::EQUIPMENT)) {
                eq = this->manager.getComponent<Equipment>(entity);
                for (size_t i = 0; i < EQUIP_SLOT_NUM; i++) {
                    if (eq->wearing[i] != invalidEntity && this->manager.entityHasComponent(eq->wearing[i], ComponentType::ARMOR)) {
                        std::shared_ptr<Armor> armor = this->manager.getComponent<Armor>(eq->wearing[i]);
                        poisonResistance += armor->poison;
                    }
                }
            }
            int damage = effects->poison.first - poisonResistance;
            if (damage < 1) damage = 1;
            std::shared_ptr<Health> health = this->manager.getComponent<Health>(entity);
            health->health -= damage;
        } else {
            effects->poison.second = -1.0f;
        }
    }
}
void ProgressEffects::progressPoison(Entity entity, std::shared_ptr<Effects> effects)
{
    float currentTime = DayNightCycle::inst().getCurrentTime();
    if (effects->lastPoisonTick <= 0.0f || currentTime <= effects->lastPoisonTick + 1.0f) return;
    if (effects->poison.second <= currentTime) {
        effects->poison.second = -1.0f;
        return;
    }

    int poisonResistance = this->calculatePoisonResistance(entity);
    int damage = effects->poison.first - poisonResistance;
    if (damage < 1) damage = 1;
    std::shared_ptr<Health> health = this->manager.getComponent<Health>(entity);
    health->health -= damage;
}


The second one is much easier to read, is it not? If such a solution is available, try to avoid nesting blocks of ifs and loops inside one another. A common trick is to invert the if statement and return from the function early, before moving on to the main block of code, as in the example above.

12. It’s not about the least number of lines.


We often say that a piece of code that takes fewer lines to accomplish the task is better. Some of us even get obsessed about how many lines of code we are adding or removing, measuring our productivity by the count. We do that for simplification, but it's not a rule that should be followed without considering readability. You can squeeze everything into a single line, but chances are it will be much harder to understand what is going on than if you split it into a few simple lines with one statement per line.

Some languages offer the possibility of writing short if statements, like so:

$result = $variable == $x ? $y : $z; // if ($variable == $x) { $result = $y; } else { $result = $z; }

It can be a great choice, but it can also easily be overdone:
$result = $variable == $x ? ($x == $y ? array_merge($x, $y, $z) : $x) : $y; // What is this heresy?!

It should be easier to grasp after the split:
$result = $y;
if ($variable == $x && $x == $y) $result = array_merge($x, $y, $z);
else if ($variable == $x) $result = $x;

These 3 lines take more space on your screen, but they take less time to analyze than the one-liner.

13. Learn design patterns and when not to use them.


There are many design patterns that are often chosen to solve coding issues. What you should keep in mind is that although these patterns solve specific issues in an application, their usefulness is affected by many factors, such as the size of the project, the number of people working on it, time (cost) constraints or the required complexity of the solution. Some patterns, such as Singleton, have even been called antipatterns, because although they provide some solutions they also introduce issues in certain cases.

Just make sure you understand the cost of implementation, in terms of the complexity that will be introduced, before choosing a design pattern for your particular solution. You might not need the Observer pattern to communicate between components in a simple system; maybe a few booleans will produce an easier-to-follow solution? Spending the time to implement a chosen design pattern is more justified in bigger, more complex applications.

14. Split your classes to data holders and data manipulators.


A data holder class is a class that keeps some data in its internal data structures. It allows access to the data through getters and setters as required, but it does not manipulate the data, unless the data is always transformed when it enters the system or always has to be mutated on access.

A very good example is in the Entity Component System architectural pattern, where Components only hold the data and Systems manipulate and process it. Another use case would be the Repository design pattern implemented for communication with an external database, where a "Model" class represents data from the database in language-specific structures and "Repository" class synchronizes the data with a database, either persisting changes on the Model or fetching them.

This separation makes it much easier to understand different parts of your application. Consider the Repository example above. If you want to display a list of data held in a collection of Models, do you need to know where this data came from? Do you need to know how it is stored in the database and how it needs to be mapped to language-specific structures? The answer to both is no. You pull the Models through existing Repository methods and focus on just what you need to do in your task, which is displaying the data.

How about the Entity Component System example? If you need to implement systems that process the use of a skill (playing an animation, playing a sound, dealing damage and so on), you don't need to know how the skill was triggered. It doesn't matter whether an AI script initiated the skill on some condition or the player used a hotkey to activate it. The only thing you need is to recognize that the data in the Component was changed, indicating which ability needs to be processed.
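
A rough sketch of this data holder / data manipulator split, in the Entity Component System spirit described above (the component and system names are invented for illustration):

// C++
#include <algorithm>
#include <unordered_map>

using Entity = unsigned int;

// Data holder: only state, no behavior beyond trivial defaults.
struct HealthComponent {
    int current = 100;
    int maximum = 100;
};

// Data manipulator: all the logic lives in the system, not in the component.
class RegenerationSystem {
public:
    void update(std::unordered_map<Entity, HealthComponent>& healths, int perTick) {
        for (auto& [entity, health] : healths) {
            (void)entity; // the system doesn't care how or why the component got here
            health.current = std::min(health.maximum, health.current + perTick);
        }
    }
};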

15. Fix issues at their roots.


You need to implement a new feature in an existing codebase. You get into the part of the code you need to change and you come across an issue: the structure of your function input doesn't work well with what you need, so you have to write quite a bit of extra code to reorganize the data and pull some more of it before you can implement your solution.

Before you do that, try to move a few steps back in your codebase. Where does this data come from and how is it used? Maybe you can get it in an easier-to-process format from the external source, or transform it immediately as you acquire it? By fixing the issue at its root, you might be fixing the same issue in multiple places and in future features or changes. Always try to simplify how you keep your data for ease of access as soon as you get it. This is especially important when you receive the data from an external source. If you need data from users of the application or from an external API, you should weed out unnecessary things and reorganize the rest immediately.

16. Hidden trap of abstractions.


Why do we write general-purpose, abstract solutions to our problems? To easily extend our applications, to make them easier to adapt to new requirements and to reuse our code so we don't have to write it ever again.

There is often a heavy cost to abstraction in terms of code readability. The highest level of abstraction is when everything is solved for you while the implementation is hidden. You are given the ability to configure how your data should be processed given some input, but you have no control over the details, such as how it's going to be stored in your database, how efficiently it's going to be processed, what information is being logged and much more. The argument for such a solution is that if a new source of data has to be processed the same way as the current one, it's easy to just throw it into the library and point at where it should be stored. You are essentially trading control for speed of implementation.

When something goes wrong and it's not a well-documented issue, someone will have a very hard time understanding all the general-purpose machinery that tries to solve far more than necessary. If we can afford it, we really shouldn't hide implementation details. Keeping control of the codebase allows for more flexibility. Don't write a general solution to a simple problem just because you think it "might" be extended in the future. It rarely is, and it can be rewritten when needed.

Let's consider an example. If you can create a class that imports data from a CSV file and packs it into a database in 10-15 lines of readable code, why bother making 2 classes and generalizing the solution so it can potentially be expanded to import from XLS or XML in the future, when you don't even have a hint that this will be needed in your application? Why pull in an external library of 5k lines of code you don't need to solve this issue?

There is rarely a necessity to generalize the storage place of your data. How many times in your career did you change database engines? In the last 10 years, I have come across an issue that was resolved that way once. Creating abstract solutions is costly and very often unnecessary, unless you are creating a library that has to cater to a huge variety of projects at once.

In contrast, when you know for sure that you have to allow importing from XLS and CSV out of the box, then the general solution might be a perfectly viable choice. It's also really not a big deal to write a general solution later, when the requirements of your application change. It will be a lot easier for someone to replace a plain and simple solution when they need to.
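
For what it's worth, the plain, non-generalized importer from the CSV example above could look roughly like this (the file layout, the field names and the UserRepository type are assumptions made up for illustration):

// C++
#include <fstream>
#include <sstream>
#include <string>

struct UserRow { std::string name; std::string email; };

// Minimal stand-in for the database layer; in a real project this would wrap your DB driver.
struct UserRepository {
    virtual void insertUser(const UserRow& user) = 0;
    virtual ~UserRepository() = default;
};

// The simple, single-purpose importer: read a line, split it, hand it to the repository.
class CsvUserImporter {
public:
    explicit CsvUserImporter(UserRepository& repository) : repository(repository) {}

    void importFile(const std::string& path) {
        std::ifstream file(path);
        std::string line;
        while (std::getline(file, line)) {
            std::stringstream row(line);
            UserRow user;
            std::getline(row, user.name, ',');
            std::getline(row, user.email, ',');
            repository.insertUser(user);
        }
    }

private:
    UserRepository& repository;
};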

17. Rules of the world are not the rules of your application.


I had an interesting argument about modeling "the real world" when applying the OOP paradigm in an application. Let's say we have to process big data for an advertisement system. We have 2 types of log messages. The first is information about an emission of an advertisement, which holds some data. The second log records a person clicking the ad; it holds exactly the same data as the emission plus a few extra fields. The emission log has no data that a click does not contain.

In the real world, we might consider both actions, viewing and clicking an ad, to be separate but similar. So by modeling the real world, we could create a base "Log" class that we extend into "ClickLog" and "EmissionLog" classes, like below:

struct Log {
    int x;
    int y;
    int z;
};
struct EmissionLog : public Log {};
struct ClickLog : public Log {
    float q;
};

The above example maps how the system works in the real world quite well. Emitting an advertisement is completely different from someone clicking it. However, such a choice fails to convey an important piece of information: in our application, everything that can process emission logs can also work on clicks, while some click processors cannot work on emissions because of the difference in data.

In our application, unlike in the real world, our ClickLog is an extension of EmissionLog. It can be processed in the same way, using the same classes that operate on EmissionLogs. If you derive clicks from emission logs, you inform your colleagues that everything that can happen to an emission can happen to a click, without them needing to know about all the possible log processors in the application.

struct EmissionLog {
    int x;
    int y;
    int z;
};
struct ClickLog : public EmissionLog {
    float q;
};

18. Type your variables if you can, even if you don’t have to.


You can skip this one if you only write in statically typed languages. In dynamically typed languages like PHP or Javascript, it can be very hard to understand what a piece of code is supposed to do without dumping the contents of the variables. For the same reason, the code can be very unpredictable when a single variable can be an object, an array or a null depending on some conditions. Allow as few possible types for your function parameters as you can. Solutions are available: PHP has had typed arguments and return types since version 7, and you can pick up Typescript instead of plain Javascript. It helps with code readability and prevents dumb mistakes.

If you don't have to, don't allow nulls either. Null is an abomination. It has to be explicitly checked for to avoid fatal errors, which requires unnecessary code. Things are even more dreadful in Javascript with its null and undefined. Mark variables that can be null, so you inform your peers:

// PHP >= 7.1
function get(?int $count): array {
    //...
}
// Typescript
interface IUser {
    name?: string; // name field might not be available
    type: number;
}

19. Write tests.


As the years go by and we manage to avoid burnout, we improve to the point where we can map even complex features in our minds and implement them without checking whether our code works until the first draft is fully implemented. At that point, it can feel like a bit of a waste of time to work in TDD cycles, as it's a bit slower to check every single thing before writing it. It is still a good practice to write integration tests that make sure your whole feature works as required, simply because you will likely leave some small errors behind and you can run the check in milliseconds.

If you are not yet experienced with your language or library, and you often try a different angle when looking for ways to solve your problem, you can benefit greatly from writing tests. It encourages splitting your work into more manageable chunks. Integration tests also explain very quickly what kind of issues your code solves, which may convey that information faster than a generic implementation would. A simple "this input expects this output" can speed up the process of understanding the application.
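
As a minimal, framework-free illustration of "this input expects this output" (the slugify() function is a made-up example of a small pure function worth pinning down with a few expectations):

// C++
#include <cassert>
#include <cctype>
#include <string>

std::string slugify(const std::string& title) {
    std::string slug;
    for (unsigned char c : title) {
        if (std::isalnum(c)) slug += char(std::tolower(c));
        else if (!slug.empty() && slug.back() != '-') slug += '-';
    }
    if (!slug.empty() && slug.back() == '-') slug.pop_back();
    return slug;
}

int main() {
    assert(slugify("Guidelines for writing readable code") == "guidelines-for-writing-readable-code");
    assert(slugify("  Hello,   World!  ") == "hello-world");
    assert(slugify("") == "");
    return 0;
}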

20. Use static code analysis tools.


There are many open-source static code analysis tools available. Quite a lot of analysis is also provided in real time by advanced IDEs. They help keep your projects on track. You can automate some of them in your repository pipelines to be run on every commit in a docker environment (example invocations follow after the list below).

Solid choices for PHP:
- Copy / Paste detector
- PHP Mess Detector - checks potential bugs and complexity
- PHP Code Sniffer - checks coding standards.
- PHPMetrics - static analysis tool with dashboard and charts.

Javascript:
- JSHint / JSLint - discover errors and potential issues; can be integrated with an IDE for real-time analysis.
- Plato - source code visualization and complexity tool.

C++:
- Cppcheck - detects bugs and undefined behavior.
- OCLint - improves code quality.

Multilanguage support:
- pmd - mess detector.
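
Most of these are command-line tools, so they can be dropped into a pipeline with a single invocation each. As a rough example (flags and rulesets vary by version and project, so treat these as illustrative rather than prescriptive):

phpcs --standard=PSR12 src/
cppcheck --enable=warning,style,performance src/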

21. Human code reviews.


A code review is just another programmer looking through your code to find mistakes and help improve the quality of the software. As much as reviews can help the overall quality of the application and allow knowledge to flow through the team, they are only useful if everyone is open to constructive criticism. Sometimes reviewers want to enforce their own vision and experience and won't accept a different view, which can also be hard to swallow.

It can be hard to pull off depending on the team environment, but the gains can be incredible. The biggest and cleanest application I have ever helped build was done with very thorough code reviews.

22. Comments.


You may have already noticed that I like to keep the rules of coding simple, so they are easy for everyone on the team to follow. The same goes for comments.

I believe comments should be added to every function, including constructors, every class property, every static constant and every class. It is a matter of discipline. When you allow laziness, by permitting exceptions whenever "something doesn't require comments because it's self-explanatory", laziness is often what you will get.

Whatever you are thinking about when implementing the feature (as long as it's relevant to the work!) is a good thing to write in the comments: especially how everything works, how a class is used, what the purpose of this enum is and so on. The purpose is very important, as it's hard to convey through naming alone unless someone already knows the conventions beforehand.

I understand that "InjectorToken" makes complete sense to you, and you might consider it "self-explanatory". Quite frankly, it's a great name. But what I want to know when I view this class is: what is this token for, what does it do, how can I use it and what is this Injector thingy? It would be perfect to see that in the comments, so nobody has to look all over the application, right?

23. Documentation.


I know, I know, I hate writing documentation too. If you write everything you know in your comments, then this part can potentially be generated automatically by a tool. Still, documentation gives you a quick way to look up important information about how your application should work.

You can use Doxygen for automatic documentation generation.
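
For instance, a function commented in the Doxygen style might look like the following (the function and its checkout-related purpose are invented for illustration):

// C++
#include <vector>

/**
 * Calculates the gross total of an order.
 *
 * Used by the (hypothetical) checkout module. Prices are handled in cents
 * to avoid floating point rounding issues.
 *
 * @param netPricesInCents  Net prices of the individual items, in cents.
 * @param taxRate           Tax rate as a fraction, e.g. 0.23 for 23% VAT.
 * @return                  Gross total in cents, rounded to the nearest cent.
 */
long long grossTotalCents(const std::vector<long long>& netPricesInCents, double taxRate)
{
    long long net = 0;
    for (long long price : netPricesInCents) net += price;
    return static_cast<long long>(net * (1.0 + taxRate) + 0.5);
}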

Conclusion


The reason this article is a set of guidelines and not rules is that I believe there are many ways to be right. If you are convinced beyond doubt that everything should be abstracted, have at it. If you believe SOLID principles should be used in every application, or that a solution not built around a well-known design pattern is immediately bad, that's fine too.

Choose the path that is right for you and your team and stick with it. And if you are ever in an experimental mood, try some of the things I mentioned in this article. Hopefully, it will improve the quality of your work. Thanks for reading and don't forget to share if you found it interesting!

Protecting Mozilla’s GitHub Repositories from Malicious Modification

At Mozilla, we’ve been working to ensure our repositories hosted on GitHub are protected from malicious modification. As the recent Gentoo incident demonstrated, such attacks are possible.

Mozilla’s original usage of GitHub was as an alternative way to provide access to our source code. Similar to Gentoo, the “source of truth” repositories were maintained on our own infrastructure. While we still utilize our own infrastructure for much of the Firefox browser code, Mozilla has many projects which exist only on GitHub. While some of those projects are just experiments, others are used in production (e.g. Firefox Accounts). We need to protect such “sensitive repositories” against malicious modification, while also keeping the barrier to contribution as low as practical.

This post describes the mitigations we have put in place to prevent shipping (or deploying) from a compromised repository. We are sharing both our findings and some tooling to support auditing. These add the protections with minimal disruption to common GitHub workflows.

The risk we are addressing here is the compromise of a GitHub user’s account, via mechanisms unique to GitHub. As the Gentoo and other incidents show, when a user account is compromised, any resource the user has permissions to can be affected.

Overview

GitHub is a wonderful ecosystem with many extensions, or “apps”, to make certain workflows easier. Apps obtain permission from a user to perform actions on their behalf. An app can ask for permissions including modifying or adding additional user credentials. GitHub makes these permission requests transparent, and requires the user to approve via the web interface, but not all users may be conversant with the implications of granting those permissions to an app. They also may not make the connection that approving such permissions for their personal repositories could grant the same access to any repository across GitHub where they can make changes.

Excessive permissions can expose repositories with sensitive information to risks, without the repository admins being aware of those risks. The best a repository admin can do is detect a fraudulent modification after it has been pushed back to GitHub. Neither GitHub nor git can be configured to prevent or highlight this sort of malicious modification; external monitoring is required.

Implementation

The following recommendations are taken from our approach to addressing this concern, with Mozilla specifics removed. As much as possible, we borrowed from the web’s best practices, used features of the GitHub platform, and tried to avoid adding friction to daily developer workflows.

Organization recommendations:

  • 2FA must be required for all members and collaborators.
  • All users, or at least those with elevated permissions:
    • Should have contact methods (email, IM) given to the org owners or repo admins. (GitHub allows Users to hide their contact info for privacy.)
    • Should understand it is their responsibility to inform the org owners or repo admins if they ever suspect their account has been compromised. (E.g. laptop stolen)

Repository recommendations:

  • Sensitive repositories should only be hosted in an organization that follows the recommendations above.
  • Production branches should be identified and configured:
    • To not allow force pushes.
    • Only give commit privileges to a small set of users.
    • Enforce those restrictions on admins & owners as well.
    • Require all commits to be GPG signed, using keys known in advance.

Workflow recommendations:

  • Deployments, releases, and other audit-worthy events should be marked with a signed tag from a GPG key known in advance (see the example commands below).
  • Deployment and release criteria should include an audit of all signed commits and tags to ensure they are signed with the expected keys.
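
For teams new to signing, the recommendations above map onto a handful of stock git commands (the key ID and tag name below are placeholders):

$ git config commit.gpgsign true          # sign every commit in this repository
$ git config user.signingkey ABCD1234     # the GPG key known in advance
$ git tag -s v1.0.0 -m "Release v1.0.0"   # signed tag marking a release or deployment
$ git verify-commit HEAD                  # audit: check the signature on a commit
$ git verify-tag v1.0.0                   # audit: check the signature on a tag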

There are some costs to implementing these protections – especially those around the signing of commits. We have developed some internal tooling to help with auditing the configurations, and plan to add tools for auditing commits. Those tools are available in the mozilla-services/GitHub-Audit repository.

[Image: README of the mozilla-services/GitHub-Audit repository]

Here’s an example of using the audit tools. First we obtain a local copy of the data we’ll need for the “octo_org” organization, and then we report on each repository:

$ ./get_branch_protections.py octo_org
2018-07-06 13:52:40,584 INFO: Running as ms_octo_cat
2018-07-06 13:52:40,854 INFO: Gathering branch protection data. (calls remaining 4992).
2018-07-06 13:52:41,117 INFO: Starting on org octo_org. (calls remaining 4992).
2018-07-06 13:52:59,116 INFO: Finished gathering branch protection data (calls remaining 4947).

Now with the data cached locally, we can run as many reports as we’d like. For example, we have written one report showing which of the above recommendations are being followed:

$ ./report_branch_status.py --header octo_org.db.json
name,protected,restricted,enforcement,signed,team_used
octo_org/react-starter,True,False,False,False,False
octo_org/node-starter,False,False,False,False,False

We can see that only “octo_org/react-starter” has enabled protection against force pushes on its production branch. The final output is in CSV format, for easy pasting into spreadsheets.

How you can help

We are still rolling out these recommendations across our teams, and learning as we go. If you think our Repository Security recommendations are appropriate for your situation, please help us make implementation easier. Add your experience to the Tips ‘n Tricks page, or open issues on our GitHub-Audit repository.

Sony Finally Admits It Doesn’t Own Bach

Here’s the thing about different people playing the same piece of music: sometimes, they’re going to sound similar. And when music is by a composer who died 268 years ago, putting his music in the public domain, a bunch of people might record it and some of them might put it online. In this situation, a combination of copyright bots and corporate intransigence led to a Kafkaesque attack on music.

Musician James Rhodes put a video of himself playing Bach on Facebook. Sony Music Entertainment claimed that 47 seconds of that performance belonged to them. Facebook muted the video as a result.

So far, this is stupid but not unusually stupid in the world of takedowns. It’s what happened after Rhodes got Sony’s notice that earned it a place in the Hall of Shame.

One argument in favor of this process is that there are supposed to be checks and balances. Takedown notices are supposed to only be sent by someone who owns the copyright in the material and actually believes that copyright’s been infringed. And if a takedown notice is wrong, a counter-notice can be sent by someone explaining that they own the work or that it’s not infringement.

Counter-notices have a lot of problems, not the least of which is that the requirements are onerous for small-time creators, requiring a fair bit of personal information. There's always the fear, even for someone who knows they own the work, that the other side will sue them anyway, which they cannot afford.

Rhodes did dispute the claim, explaining that “this is my own performance of Bach. Who died 300 years ago. I own all the rights.” Sony rejected this reasoning.

While we don’t know for sure what Sony’s process is, we can guess that a copyright bot, or a human acting just as mechanically, was at the center of this mess. A human doing actual analysis would have looked at a video of a man playing a piece of music older than American copyright law and determined that it was not something they owned. It almost feels like an automatic response also rejected Rhodes’ appeal, because we certainly hope a thoughtful person would have received his notice and accepted it.

Rhodes took his story to Twitter, where it picked up some steam, and emailed the heads of Sony Classical and Sony's public relations, eventually getting his audio restored. He tweeted, "What about the thousands of other musicians without that reach…?" He raises a good point.

None of the supposed checks worked. Public pressure and the persistence of Rhodes were the only reasons this complaint went away, despite how the rules are supposed to protect fair use and the public domain.

How many more ways do we need to say that copyright bots and filters don't work? That mandating them, as the European Union is poised to do, is dangerous and shortsighted? We hear about these misfires roughly the same way they get resolved: because they generate enough noise. How many more lead to a creator's work being taken down with no recourse?

The Most Notorious Towing Company in Chicago Gets the Boot

CHICAGO—In a city known for gangsters, bootleggers and corrupt politicians, residents will tell you the most reviled actor on the North Side is a tow-truck company.

For more than half a century, Chicagoans have said Lincoln Towing Service—locally known as the Lincoln Park Pirates—has hauled away cars for no reason, overcharged motorists to get them back and taunted owners who complained.

One...


Thousands of scientists publish a paper every five days

Authorship is the coin of scholarship — and some researchers are minting a lot. We searched Scopus for authors who had published more than 72 papers (the equivalent of one paper every 5 days) in any one calendar year between 2000 and 2016, a figure that many would consider implausibly prolific [1]. We found more than 9,000 individuals, and made every effort to count only ‘full papers’ — articles, conference papers, substantive comments and reviews — not editorials, letters to the editor and the like. We hoped that this could be a useful exercise in understanding what scientific authorship means.

We must be clear: we have no evidence that these authors are doing anything inappropriate. Some scientists who are members of large consortia could meet the criteria for authorship on a very high volume of papers. Our findings suggest that some fields or research teams have operationalized their own definitions of what authorship means.

The vast majority of hyperprolific authors (7,888 author records, 86%) published in physics. In high-energy and particle physics, projects are done by large international teams that can have upwards of 1,000 members. All participants are listed as authors as a mark of membership of the team, not for writing or revising the papers. We therefore excluded authors in physics.

Of what remained, 909 author records were Chinese or Korean names. Because Scopus disambiguates Chinese and Korean names imperfectly, these may have wrongly combined distinct individuals. For 2016 (when disambiguation had improved for Chinese and Korean names), at least 12, and possibly more than 20, authors based in China were hyperprolific, the largest number from any country that year. We believe that this could be connected to Chinese policies that reward publication with cash or to possible corruption [2,3].

Because of the disambiguation issues, we excluded these names from further analysis, as well as group names and cases in which we found errors (such as journalistic news items misclassified as full articles), duplicate entries, or conference papers misassigned to an organizer.

This left 265 authors (see Supplementary Information). The number of hyperprolific authors (after our exclusions) grew about 20-fold between 2001 and 2014, and then levelled off (see ‘Hyperprolific authors proliferate’). Over the same period, the total number of authors increased by 2.5-fold.

We e-mailed all 265 authors asking for their insights about how they reached this extremely productive class. The 81 replies are provided in the Supplementary Information. Common themes were: hard work; love of research; mentorship of very many young researchers; leadership of a research team, or even of many teams; extensive collaboration; working on multiple research areas or in core services; availability of suitable extensive resources and data; culmination of a large project; personal values such as generosity and sharing; experiences growing up; and sleeping only a few hours per day.

[Chart: ‘Hyperprolific authors proliferate’. Source: J. P. A. Ioannidis, R. Klavans & K. W. Boyack]

About half of the hyperprolific authors were in medical and life sciences (medicine n = 101, health sciences n = 11, brain n = 17, biology n = 6, infectious diseases n = 3). When we excluded conference papers, almost two-thirds belonged to medical and life sciences (86/131). Among the 265, 154 authors produced more than the equivalent of one paper every 5 days for 2 or more calendar years; 69 did so for 4 or more calendar years. Papers with 10–100 authors are common in these CVs, especially in medical and life sciences, but papers with the hundreds of authors seen in particle physics are uncommon.

Materials scientist Akihisa Inoue, former president of Tohoku University in Japan and a member of multiple prestigious academies, holds the record. He met our definition of being hyperprolific for 12 calendar years between 2000 and 2016. Since 1976, his name appears on 2,566 full papers indexed in Scopus. He has also retracted seven papers found to be self-duplications of previously published work [4]. We searched for news articles in Google detailing retractions for the next 20 most hyperprolific authors and found only one other author (Jeroen Bax) to have one retracted paper.

The 265 hyperprolific authors worked in 37 countries, with the highest number in the United States (n = 50), followed by Germany (n = 28) and Japan (n = 27). The proportion from the United States (19%) is roughly similar to its share of published science. Germany and Japan are over-represented. There were disproportionally more hyperprolific authors in Malaysia (n = 13) and Saudi Arabia (n = 7), countries both known to incentivize publication with cash rewards [5].

Hyperprolific authors also tended to cluster in particular institutions, often as part of a common study. For example, Erasmus University Rotterdam in the Netherlands had nine hyperprolific authors, more than any other institution. Seven of them co-authored mostly papers related to the Rotterdam study, a nearly 30-year-old epidemiological project, or its successor Generation R study, which have followed multiple health parameters in thousands of older adults and yielded thousands of publications. Five hyperprolific investigators from Harvard University in Cambridge, Massachusetts, also often co-authored papers related to cohort studies. Eleven hyperprolific authors across different institutions were on one large cohort study, the European Prospective Investigation on Cancer and Nutrition; other large epidemiological studies were also represented. Hyperprolific authors were also concentrated in cardiology and crystallography.

These biological and medical disciplines with many hyperprolific authors exhibit different patterns from those found in particle and high-energy physics. Papers with hundreds to thousands of authors are the norm across a community of many thousands of scientists working in projects based at CERN, Europe’s particle-physics laboratory near Geneva, Switzerland. In crystallography, papers tend to have few co-authors. In epidemiology and cardiology, long lists of authors appear only in relationship to specific research teams that seem to have a tradition of extensive authorship lists.

This raises the question of what authorship entails. The US National Institutes of Health, for example, has guidelines on the activities that qualify: actively supervising, designing and doing experiments, and data acquisition and analysis outside “very basic” work plus drafting the manuscript. Collecting funds or distant mentorship do not qualify. Most of the 6,000 authors in a recent survey across many geographical regions and disciplines felt that drafting a paper, interpreting results and analysing data should qualify for authorship, but attitudes varied by region and field [6].

Authorship criteria

Perhaps the most widely established requirements for authorship are the Vancouver criteria established by the International Committee of Medical Journal Editors in 1988. These specify that authors must do all of four things to qualify: play a part in designing or conducting experiments or processing results; help to write or revise the manuscript; approve the published version; and take responsibility for the article’s contents.

The International Committee of Medical Journal Editors does not count supervision, mentoring or obtaining funding as sufficient for authorship. We did observe that some authors seemed to become hyperprolific on becoming full professors, department chairs or both. It is common and perhaps expected for scientists who assume leadership roles in large centres to accelerate their productivity. For example, clinical cardiologists publish more papers after they assume director roles (despite heavy clinical and administrative duties). Occasionally, the acceleration is stunning: at the peak of their productivity, some cardiologists publish 10 to 80 times more papers in one year compared with their average annual productivity when they were 35–42 years old. There was also often a sharp decrease after passing the chair to a successor. Another study noted similar patterns two decades ago [7].

One unexpected result was that some hyperprolific authors placed many publications in a single journal. Prominent in this regard were Acta Crystallographica Section E: Structure Reports Online (relaunched in 2014 as Section E: Crystallographic Communications, with brief structural data reports now published in IuCrData) and Zeitschrift für Kristallographie New Crystal Structures. Three authors have each published more than 600 articles in the former (Hoong-Kun Fun, Seik Weng Ng and Edward Tiekink); three authors have each published more than 400 papers in the latter (Karl Peters, Eva Maria Peters and Edward Tiekink). Three other authors (Anne Marie Api, Charlene Letizia, Sneha Bhatia) published many papers in single supplement issues of Food and Chemical Toxicology focused on reviews of fragrance materials.

Journals indexed in Scopus are generally considered to be quality journals. The citation impact of hyperprolific authors was usually high but varied widely, with a median of 19,805 citations per author (range: 380 to 200,439). The median number of full papers per hyperprolific author in 2000–2016 was 677; across all hyperprolific authors, last author positions accounted for 42.5%, first author positions for 7.1%, and single authorships for 1.4%. Across the years, the median proportion of papers with middle author positions (that is, not a single, first or last author) was 51%, but varied from 2.1% to 98.5% for individual authors.

Our work to identify hyperprolific authors is admittedly crude. It is mainly intended to raise the larger question of what authorship entails. Whether and how authorship is justified unavoidably varies for each author and each paper, and norms differ by field. It is likely that sometimes authorship can be gamed, secured through coercion or provided as a favour. We could not assess these patterns in our data. We did not examine contributorship statements [8], which are not archived in Scopus. Nevertheless, even contributorship statements can be gamed and might not be accurate.

Further work is needed to explore how to best normalize these data and what is the optimal level of normalization: for example, adjusting for wide discipline, relatively narrow field and/or highly specific research team.

What authors say

To better understand authorship norms, we e-mailed a survey to the 81 hyperprolific authors of 2016 (see Supplementary Information). We asked whether they fulfilled all four Vancouver criteria. Of the 27 who completed the survey, most said they did not (see ‘Survey’). Almost all the responders were from US and European institutions. The only two responders from elsewhere stated that they failed Vancouver criteria in most of their papers. It is likely that the survey underestimates the proportion not meeting Vancouver criteria.

Survey

One-third of the 81 authors identified as hyperprolific in 2016 replied when asked how often they met each of 4 criteria established for authorship of medical studies. Of the 27 responders, 19 admitted they had not met at least 1 criterion more than 25% of the time. Eleven wrote that they had not met two or more criteria upwards of 25% of the time.

• Substantial contributions to the conception or design of the work; or the acquisition, analysis or interpretation of the data for the work (9 of 27 met this criterion in less than 75% of their papers).

• Drafting the work or revising it critically for important intellectual content (9 of 27 met this criterion in less than 75% of their papers).

• Final approval of the version to be published (3 out of 27 met this criterion in less than 75% of their papers).

• Agreement to be accountable for all aspects of the work (14 out of 27 met this criterion in less than 75% of their papers).

Not all authors had approved the final versions of their own papers, but all considered approval of the final version necessary for authorship. Fifty-nine per cent (16 of 27) said that they had contributed more than any other listed author for 25 or more of the papers they authored in 2016.

Responses to the question “What, in your own words, do you think should be required for authorship?” generally reflected a requirement for “significant contributions”, but also dissatisfaction with how authorship was assessed. One scientist said, “I personally don’t count them as ‘my papers’ and don’t have them on my CV as such, as there is a distinction between being a ‘named author’ versus a ‘consortium member’ authorship.” Another observed that authorship was often awarded for seniority, and another that better distinctions were essential. “I think there should be levels of authorship — and not those implied by order!” It will be interesting to monitor how innovations in assigning credit, such as data citation or formal author contribution taxonomies, could alter authorship conventions. Authorship norms can vary within each field and even within each team. For example, some teams in epidemiology and cardiology apparently offer authorship more generously; others stick to stricter (and probably more appropriate) authorship criteria. For a similar task and contribution, one cohort study might credit 20 authors, another might give credit only to 3 people or none. For example, genome-wide studies typically include many dozens of authors. As a dramatic counter-example, one recent publication of a genome-wide study had only one author [9], and apparently that researcher did the same amount of work for which perhaps dozens would get authorship credit in similar papers spearheaded by different teams. Some evidence suggests that the increase in the average number of authors per paper does not reflect so much the genuine needs of team science as the pressure to ‘publish or perish’ [10].

Widely used citation and impact metrics should be adjusted accordingly. For instance, if adding more authors diminished the credit each author received, unwarranted multi-authorship might go down. We found that the 30 hyperprolific authors who seemed to benefit the most from co-authorship numbered 6 cardiologists and 24 epidemiologists (including those working on population genetics studies). (For these scientists, the ratio of their Hirsch H index to their co-authorship-adjusted Schreiber Hm index was higher; see Supplementary Information.)

Overall, hyperprolific authors might include some of the most energetic and excellent scientists. However, such modes of publishing might also reflect idiosyncratic field norms, to say the least. Loose definitions of authorship, and an unfortunate tendency to reduce assessments to counting papers, muddy how credit is assigned. One still needs to see the total publishing output of each scientist, benchmarked against norms for their field. And of course, there is no substitute for reading the papers and trying to understand what the authors have done.

Jeff Bezos launches $2B fund to help homeless families


In a tweet this morning, Amazon founder (and the world’s richest man) Jeff Bezos announced that he and his wife were creating a $2 billion fund to finance a network of nonprofit preschools and donate funds to organizations helping homeless families.

“The Day 1 Families Fund will issue annual leadership awards to organizations and civic groups doing compassionate, needle moving work to provide shelter and hunger support to address the immediate needs of young families,” Bezos writes in a statement.

There’s also a Day 1 Academies Fund that will launch a network of free, Montessori-inspired schools in low-income neighborhoods.

Bezos said the schools will employ the “same set of principles that have driven Amazon.” Which, for Bezos, means an intense focus on the customer.

The funds are called the “Day 1” funds because they align with Bezos’ stated philosophy of “maintaining a Day 1 mentality.”

Starting a network of free schools for underprivileged children and giving money to organizations working to alleviate the needs of the nation’s homeless are inarguably good things, but it’s unclear whether these individual steps can address the more systemic problems that underlie homelessness and the lack of educational opportunity across the country.

Perhaps Bezos was inspired to battle the nation’s homeless plight when he saw this report on Vickie Shannon Allen, an Amazon employee who became homeless after a workplace accident cost her her job.

It’s also a bit rich to see Bezos tackle the issue of homelessness after his company was the mustache-twirling arch-nemesis of a bill in Seattle that would have created a tax to finance homeless shelters and low-income housing.

Fortune has more on Amazon’s work to kill the measure:

Amazon opposed the tax, originally floated at $500 a year for each of its Seattle employees. To signal its displeasure, the company halted construction on a new tower, and suggested it might sublet 722,000 square feet it had just leased in a signature downtown building. When the council approved a reduced $275 tax, Amazon restarted construction on the tower. But it also joined Starbucks and other local employers to fund a group, No Tax on Jobs, that raised over $300,000 to pay for signature gatherers for a referendum to repeal the head tax. In a statement after the vote, Amazon vice president Drew Herdener said, “Today’s vote by the Seattle City Council to repeal the tax on job creation is the right decision for the region’s economic prosperity.”

With the new fund, Bezos joins a long line of incredibly rich people (cf. the Chan-Zuckerberg and Gates Foundations…) who are taking it upon themselves to fund programs for social good.

It’s part of philanthropy’s long history of ignoring broader structural issues as a way for billionaires to treat their contributions as a gift rather than an obligation.

Here’s Bezos’ tweet announcing the new funds.

Show HN: Uplink – Build Reusable Objects for Consuming Web APIs


Bits in a Float, and Infinity, NaN, and Denormal (2012)

CS 301 Lecture, Dr. Lawlor

Bits in a Floating-Point Number

Floats represent continuous values.  But they do it using discrete bits.

A "float" (as defined by IEEE Standard 754) consists of three bitfields:

Sign
Exponent
Fraction (or "Mantissa")
1 bit--
  0 for positive
  1 for negative
8 unsigned bits--
  127 means 20
  137 means 210
23 bits-- a binary fraction.

Don't forget the implicit leading 1!

The sign is in the highest-order bit, the exponent in the next 8 bits, and the fraction in the remaining bits.

The hardware interprets a float as having the value:

    value = (-1)^sign * 2^(exponent-127) * 1.fraction

Note that the mantissa has an implicit leading binary 1 applied.  The 1 isn't stored, which actually causes some headaches.  (Even worse, if the exponent field is zero, then it's an implicit leading 0; a "denormalized" number as we'll talk about on Wednesday.)

For example, the value "8" would be stored with sign bit 0, exponent 130 (==3+127), and mantissa 000... (without the leading 1), since:

    8 = (-1)^0 * 2^(130-127) * 1.0000....
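One way to sanity-check that formula is a quick sketch that rebuilds a float from its three fields using ldexp from <cmath>; feeding it sign 0, exponent 130, fraction 0 should hand back 8:

#include <cstdio>
#include <cmath>

/* Rebuild a float from its three fields, straight from the formula above.
   (foo() is the NetRun-style entry point; call it main() anywhere else.) */
float from_fields(unsigned int sign,unsigned int exponent,unsigned int fraction) {
	float mantissa=1.0f+fraction/8388608.0f;                 /* 1.fraction; 8388608 == 2^23 */
	float magnitude=std::ldexp(mantissa,(int)exponent-127);  /* times 2^(exponent-127) */
	return (sign?-1.0f:1.0f)*magnitude;
}

int foo(void) {
	std::printf("%f\n",from_fields(0,130,0));       /* sign 0, exp 130, fraction 0 -> 8.000000 */
	std::printf("%f\n",from_fields(1,127,4194304)); /* fraction 0x400000 is 0.5 -> -1.500000 */
	return 0;
}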

You can stare at the bits inside a float by converting it to an integer.  The quick and dirty way to do this is via a pointer typecast, but modern compilers will sometimes over-optimize this, especially in inlined code:

#include <iostream>
#include <iomanip>

void print_bits(float f) {
	int i=*reinterpret_cast<int *>(&f); /* read bits with "pointer shuffle" */
	std::cout<<" float "<<std::setw(10)<<f<<" = ";
	for (int bit=31;bit>=0;bit--) {
		if (i&(1<<bit)) std::cout<<"1"; else std::cout<<"0";
		if (bit==31) std::cout<<" ";              /* space after the sign bit */
		if (bit==23) std::cout<<" (implicit 1)."; /* the fraction bits start here */
	}
	std::cout<<std::endl;
}

int foo(void) {
	print_bits(0.0);
	print_bits(-1.0);
	print_bits(1.0);
	print_bits(2.0);
	print_bits(4.0);
	print_bits(8.0);
	print_bits(1.125);
	print_bits(1.25);
	print_bits(1.5);
	print_bits(1+1.0/10);
	return sizeof(float);
}

(Try this in NetRun now!)

The official way to dissect the parts of a float is using a "union" and a bitfield like so:
#include <iostream>

/* IEEE floating-point number's bits:  sign  exponent   mantissa */
struct float_bits {
	unsigned int fraction:23; /**< Value is binary 1.fraction ("mantissa") */
	unsigned int exp:8;       /**< Value is 2^(exp-127) */
	unsigned int sign:1;      /**< 0 for positive, 1 for negative */
};

/* A union is a struct where all the fields *overlap* each other */
union float_dissector {
	float f;
	float_bits b;
};

int foo(void) {
	float_dissector s;
	s.f=8.0;
	std::cout<<s.f<<"= sign "<<s.b.sign<<" exp "<<s.b.exp<<" fract "<<s.b.fraction<<"\n";
	return 0;
}

(Executable NetRun link)

I like to joke that a union misused to convert bits between incompatible types is an "unholy union".
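Neither the pointer cast nor the union punning is strictly blessed by the C++ standard, so if your compiler gets clever, a safer sketch is to copy the bytes with memcpy (this assumes float and int are both 4 bytes, which holds on every machine we care about):

#include <cstring>   /* memcpy */
#include <cstdio>

/* Copy the float's bytes into an unsigned int--fully defined behavior--
   then mask out the three fields by hand. */
unsigned int float_to_bits(float f) {
	unsigned int i;
	std::memcpy(&i,&f,sizeof(i)); /* assumes sizeof(float)==sizeof(int)==4 */
	return i;
}

int foo(void) {
	unsigned int i=float_to_bits(8.0f);
	unsigned int sign=i>>31;          /* top bit */
	unsigned int exp=(i>>23)&0xff;    /* next 8 bits */
	unsigned int fract=i&0x7fffff;    /* low 23 bits */
	std::printf("8.0f = sign %u exp %u fract 0x%06x\n",sign,exp,fract);
	return 0;
}

C++20's std::bit_cast does the same copy in one line, if your compiler is new enough.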

In addition to the 32-bit "float", there are several other different sizes of floating-point types:

C Datatype    Size                             Approx. Precision   Approx. Range   Exponent Bits   Fraction Bits   +-1 range
float         4 bytes (everywhere)             1.0x10^-7           10^38           8               23              2^24
double        8 bytes (everywhere)             2.0x10^-15          10^308          11              52              2^53
long double   12-16 bytes (if it even exists)  2.0x10^-20          10^4932         15              64              2^65

Nowadays floats have roughly the same performance as integers: addition, subtraction, or multiplication all take about a nanosecond.  That is, floats are now cheap, and you can consider using floats for all sorts of stuff--even when you don't care about fractions!  The advantages of using floats are:
  • Floats can store fractional numbers.
  • Floats never overflow; they hit "infinity" as explored below.
  • "double" has more bits than "int" (but less than "long").

Normal (non-Weird) Floats

Recall that a "float" as as defined by IEEE Standard 754 consists of three bitfields:
Sign
Exponent
Mantissa (or Fraction)
1 bit--
  0 for positive
  1 for negative
8 bits--
  127 means 20
  137 means 210
23 bits-- a binary fraction.

The hardware usually interprets a float as having the value:

    value = (-1)^sign * 2^(exponent-127) * 1.fraction

Note that the mantissa normally has an implicit leading 1 applied.  

Weird: Zeros and Denormals

However, if the "exponent" field is exactly zero, the implicit leading digit is taken to be 0, like this:

   value = (-1)^sign * 2^(-126) * 0.fraction

Suppressing the leading 1 allows you to exactly represent 0: the bit pattern for 0.0 is just exponent==0 and fraction==00000000 (that is, everything zero).  If you set the sign bit to negative, you have "negative zero", a strange curiosity.  Positive and negative zero work the same way in arithmetic operations, and as far as I know there's no reason to prefer one to the other.  The "==" operator claims positive and negative zero are the same!

If the fraction field isn't zero, but the exponent field is, you have a "denormalized number"--these are numbers too small to represent with a leading one.  You always need denormals to represent zero, but denormals (also known as "subnormal" values) also provide a little more range at the very low end--they can store values down to around 1.0e-40 for "float", and 1.0e-310 for "double". 
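You can poke at these limits yourself with std::numeric_limits; here's a short sketch (the exact digits printed will vary a little by compiler and library):

#include <cstdio>
#include <limits>

int foo(void) {
	/* smallest normalized float: exponent field is 1, fraction is 0 */
	std::printf("smallest normal float:   %g\n",std::numeric_limits<float>::min());
	/* smallest denormal float: exponent field is 0, fraction is 1 */
	std::printf("smallest denormal float: %g\n",std::numeric_limits<float>::denorm_min());
	/* halving the smallest normal quietly lands in denormal territory, not at zero */
	float f=std::numeric_limits<float>::min()/2.0f;
	std::printf("min/2 is still nonzero:  %g\n",f);
	return 0;
}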

See below for the performance problem with denormals.

Weird: Infinity

If the exponent field is as big as it can get (for "float", 255), this indicates another sort of special number.  If the fraction field is zero, the number is interpreted as positive or negative "infinity".  The hardware will generate "infinity" when dividing by zero, or when another operation exceeds the representable range.
float z=0.0;
float f=1.0/z;
std::cout<<f<<"\n";
return (int)f;

(Try this in NetRun now!)

Arithmetic on infinities works just the way you'd expect: infinity plus 1.0 gives infinity, etc. (See tables below.)  Positive and negative infinities exist, and work as you'd expect.  Note that while divide-by-integer-zero causes a crash (divide by zero error), divide-by-floating-point-zero just happily returns infinity by default.

You can also get to infinity by adding a number to itself repeatedly, for example:

float x=1.0;
while (true) {
	float old_x=x;
	x=x+x;
	std::cout<<x<<"\n";
	if (x==old_x) {
		std::cout<<"Finally hit "<<x<<" and stopped.\n";
		return 0;
	}
}

(Try this in NetRun now!)

This is the same type of infinity you'd get by dividing by zero.

Weird: NaN

If you do an operation that doesn't make sense, like:
  • 0.0/0.0 (neither zero nor infinity, because we'd want (x/x)==1.0; but not 1.0 either, because we'd want (2*x)/x==2.0...)
  • infinity-infinity (might cancel out to anything)
  • infinity*0
The machine just gives a special "error" number called a "NaN" (Not-a-Number).  The idea is that if you run some complicated program that screws up, you don't want to get a plausible but wrong answer like "4" (like we get with integer overflow!); you want something totally implausible like "nan" to indicate an error happened.   For example, this program prints "nan" and returns -2147483648 (0x80000000):
float f=sqrt(-1.0);
std::cout<<f<<"\n";
return (int)f;

(Try this in NetRun now!)

This is a "NaN", which is represented with a huge exponent and a *nonzero* fraction field.  Positive and negative nans exist, but like zeros both signs seem to work the same.  x86 seems to rewrite the bits of all NaNs to a special pattern it prefers (0x7FC00000 for float, with exponent bits and the leading fraction bit all set to 1).

Performance impact of special values

Machines properly handle ordinary floating-point numbers and zero in hardware at full speed.

However, most modern machines *don't* handle denormals, infinities, or NaNs in hardware--instead when one of these special values occurs, they trap out to software which handles the problem and restarts the computation.  This trapping process takes time, as shown in the following program:
(Executable NetRun Link)

#include <stdio.h>
/* time_function() is a NetRun helper: it returns the seconds per call of the function you pass it. */

enum {n_vals=1000};
double vals[n_vals];

int average_vals(void) {
	for (int i=0;i<n_vals-1;i++)
		vals[i]=0.5*(vals[i]+vals[i+1]);
	return 0;
}

int foo(void) {
	int i;
	for (i=0;i<n_vals;i++) vals[i]=0.0;
	printf(" Zeros: %.3f ns/float\n",time_function(average_vals)/n_vals*1.0e9);
	for (i=0;i<n_vals;i++) vals[i]=1.0;
	printf(" Ones: %.3f ns/float\n",time_function(average_vals)/n_vals*1.0e9);
	for (i=0;i<n_vals;i++) vals[i]=1.0e-310;
	printf(" Denorm: %.3f ns/float\n",time_function(average_vals)/n_vals*1.0e9);
	float x=0.0;
	for (i=0;i<n_vals;i++) vals[i]=1.0/x;
	printf(" Inf: %.3f ns/float\n",time_function(average_vals)/n_vals*1.0e9);
	for (i=0;i<n_vals;i++) vals[i]=x/x;
	printf(" NaN: %.3f ns/float\n",time_function(average_vals)/n_vals*1.0e9);
	return 0;
}

Many machines run *seriously* slower for the weird numbers.  Here are the results of the above program, in nanoseconds per float operation, on a variety of machines:

             Intel P3   Intel P4   Core2    Q6600   Sandy Bridge   Phenom II   PPC G5   MIPS R5000   Intel 486
Zero              4.0        1.6     1.6      1.1            0.6         1.0      2.3        131.0      1215.8
One               4.0        1.6     1.9      1.1            0.6         1.0      2.2        130.6       864.8
Denorm          335.1      295.5   517.9    130.0           46.3       109.0     10.1      24437.0      3879.0
Infinity        191.9      706.4   346.9      1.1            0.6         1.0      2.1        153.2      2558.2
NaN             206.2      772.2   356.3      1.1            0.6         1.0      2.1      10924.1      3103.7

Generally, no machine has any performance penalty for zero, despite it being somewhat "weird".

Virtually all current machines have some performance penalty for denormalized numbers, sometimes hundreds of times slower than ordinary numbers.
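On x86 with SSE you can trade away denormals for speed by turning on flush-to-zero mode; here's a sketch of the usual approach (results that would have been denormal just become 0.0, so only do this when you don't care about the very bottom of the range):

#include <cstdio>
#include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE; x86 with SSE only */

int foo(void) {
	/* Flush-to-zero: any result that would be denormal becomes 0.0 instead. */
	_MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);

	float tiny=1.0e-38f;        /* just above the bottom of the normal range */
	float result=tiny*0.001f;   /* would be a denormal near 1e-41 */
	std::printf("%g\n",result); /* prints 0 with flush-to-zero on, ~1e-41 with it off */
	return 0;
}

There's also a matching denormals-are-zero (DAZ) mode for denormal inputs, set the same way via _MM_SET_DENORMALS_ZERO_MODE, but flush-to-zero covers the common case above.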

Infinities and NaN are fast again on most recent machines.

My friends at Illinois and I wrote a paper on this with many more performance details.


Bonus: Arithmetic Tables for Special Floating-Point Numbers

These tables were computed for "float", but should be identical with any number size on any IEEE machine (which virtually everything is).  "big" is a large but finite number, here 1.0e30.  "lil" is a denormalized number, here 1.0e-40. "inf" is an infinity.  "nan" is a Not-A-Number.  Here's the source code to generate these tables.

These all go about how you'd expect--"inf" for things that are too big (or -inf for too small), "nan" for things that don't make sense (like 0.0/0.0, or infinity times zero, or nan with anything else).

Addition

+       -nan    -inf    -big    -1      -lil    -0      +0      +lil    +1      +big    +inf    +nan
-nan    nan     nan     nan     nan     nan     nan     nan     nan     nan     nan     nan     nan
-inf    nan     -inf    -inf    -inf    -inf    -inf    -inf    -inf    -inf    -inf    nan     nan
-big    nan     -inf    -2e+30  -big    -big    -big    -big    -big    -big    0       +inf    nan
-1      nan     -inf    -big    -2      -1      -1      -1      -1      0       +big    +inf    nan
-lil    nan     -inf    -big    -1      -2e-40  -lil    -lil    0       +1      +big    +inf    nan
-0      nan     -inf    -big    -1      -lil    -0      0       +lil    +1      +big    +inf    nan
+0      nan     -inf    -big    -1      -lil    0       0       +lil    +1      +big    +inf    nan
+lil    nan     -inf    -big    -1      0       +lil    +lil    2e-40   +1      +big    +inf    nan
+1      nan     -inf    -big    0       +1      +1      +1      +1      2       +big    +inf    nan
+big    nan     -inf    0       +big    +big    +big    +big    +big    +big    2e+30   +inf    nan
+inf    nan     nan     +inf    +inf    +inf    +inf    +inf    +inf    +inf    +inf    +inf    nan
+nan    nan     nan     nan     nan     nan     nan     nan     nan     nan     nan     nan     nan
Note how infinity-infinity gives nan, but infinity+infinity is infinity.

Subtraction

-       -nan    -inf    -big    -1      -lil    -0      +0      +lil    +1      +big    +inf    +nan
-nan    nan     nan     nan     nan     nan     nan     nan     nan     nan     nan     nan     nan
-inf    nan     nan     -inf    -inf    -inf    -inf    -inf    -inf    -inf    -inf    -inf    nan
-big    nan     +inf    0       -big    -big    -big    -big    -big    -big    -2e+30  -inf    nan
-1      nan     +inf    +big    0       -1      -1      -1      -1      -2      -big    -inf    nan
-lil    nan     +inf    +big    +1      0       -lil    -lil    -2e-40  -1      -big    -inf    nan
-0      nan     +inf    +big    +1      +lil    0       -0      -lil    -1      -big    -inf    nan
+0      nan     +inf    +big    +1      +lil    0       0       -lil    -1      -big    -inf    nan
+lil    nan     +inf    +big    +1      2e-40   +lil    +lil    0       -1      -big    -inf    nan
+1      nan     +inf    +big    2       +1      +1      +1      +1      0       -big    -inf    nan
+big    nan     +inf    2e+30   +big    +big    +big    +big    +big    +big    0       -inf    nan
+inf    nan     +inf    +inf    +inf    +inf    +inf    +inf    +inf    +inf    +inf    nan     nan
+nan    nan     nan     nan     nan     nan     nan     nan     nan     nan     nan     nan     nan

Multiplication

*       -nan    -inf    -big    -1      -lil    -0      +0      +lil    +1      +big    +inf    +nan
-nan    nan     nan     nan     nan     nan     nan     nan     nan     nan     nan     nan     nan
-inf    nan     +inf    +inf    +inf    +inf    nan     nan     -inf    -inf    -inf    -inf    nan
-big    nan     +inf    +inf    +big    1e-10   0       -0      -1e-10  -big    -inf    -inf    nan
-1      nan     +inf    +big    +1      +lil    0       -0      -lil    -1      -big    -inf    nan
-lil    nan     +inf    1e-10   +lil    0       0       -0      -0      -lil    -1e-10  -inf    nan
-0      nan     nan     0       0       0       0       -0      -0      -0      -0      nan     nan
+0      nan     nan     -0      -0      -0      -0      0       0       0       0       nan     nan
+lil    nan     -inf    -1e-10  -lil    -0      -0      0       0       +lil    1e-10   +inf    nan
+1      nan     -inf    -big    -1      -lil    -0      0       +lil    +1      +big    +inf    nan
+big    nan     -inf    -inf    -big    -1e-10  -0      0       1e-10   +big    +inf    +inf    nan
+inf    nan     -inf    -inf    -inf    -inf    nan     nan     +inf    +inf    +inf    +inf    nan
+nan    nan     nan     nan     nan     nan     nan     nan     nan     nan     nan     nan     nan
Note that 0*infinity gives nan, and out-of-range multiplications give infinities.

Division

/       -nan    -inf    -big    -1      -lil    -0      +0      +lil    +1      +big    +inf    +nan
-nan    nan     nan     nan     nan     nan     nan     nan     nan     nan     nan     nan     nan
-inf    nan     nan     +inf    +inf    +inf    +inf    -inf    -inf    -inf    -inf    nan     nan
-big    nan     0       +1      +big    +inf    +inf    -inf    -inf    -big    -1      -0      nan
-1      nan     0       1e-30   +1      +inf    +inf    -inf    -inf    -1      -1e-30  -0      nan
-lil    nan     0       0       +lil    +1      +inf    -inf    -1      -lil    -0      -0      nan
-0      nan     0       0       0       0       nan     nan     -0      -0      -0      -0      nan
+0      nan     -0      -0      -0      -0      nan     nan     0       0       0       0       nan
+lil    nan     -0      -0      -lil    -1      -inf    +inf    +1      +lil    0       0       nan
+1      nan     -0      -1e-30  -1      -inf    -inf    +inf    +inf    +1      1e-30   0       nan
+big    nan     -0      -1      -big    -inf    -inf    +inf    +inf    +big    +1      0       nan
+inf    nan     nan     -inf    -inf    -inf    -inf    +inf    +inf    +inf    +inf    nan     nan
+nan    nan     nan     nan     nan     nan     nan     nan     nan     nan     nan     nan     nan
Note that 0/0, and inf/inf give NaNs; while out-of-range divisions like big/lil or 1.0/0.0 give infinities (and not errors!).

Equality

==      -nan    -inf    -big    -1      -lil    -0      +0      +lil    +1      +big    +inf    +nan
-nan    0       0       0       0       0       0       0       0       0       0       0       0
-inf    0       +1      0       0       0       0       0       0       0       0       0       0
-big    0       0       +1      0       0       0       0       0       0       0       0       0
-1      0       0       0       +1      0       0       0       0       0       0       0       0
-lil    0       0       0       0       +1      0       0       0       0       0       0       0
-0      0       0       0       0       0       +1      +1      0       0       0       0       0
+0      0       0       0       0       0       +1      +1      0       0       0       0       0
+lil    0       0       0       0       0       0       0       +1      0       0       0       0
+1      0       0       0       0       0       0       0       0       +1      0       0       0
+big    0       0       0       0       0       0       0       0       0       +1      0       0
+inf    0       0       0       0       0       0       0       0       0       0       +1      0
+nan    0       0       0       0       0       0       0       0       0       0       0       0
Note that positive and negative zeros are considered equal, and a "NaN" doesn't equal anything--even itself!

Less-Than

<       -nan    -inf    -big    -1      -lil    -0      +0      +lil    +1      +big    +inf    +nan
-nan    0       0       0       0       0       0       0       0       0       0       0       0
-inf    0       0       +1      +1      +1      +1      +1      +1      +1      +1      +1      0
-big    0       0       0       +1      +1      +1      +1      +1      +1      +1      +1      0
-1      0       0       0       0       +1      +1      +1      +1      +1      +1      +1      0
-lil    0       0       0       0       0       +1      +1      +1      +1      +1      +1      0
-0      0       0       0       0       0       0       0       +1      +1      +1      +1      0
+0      0       0       0       0       0       0       0       +1      +1      +1      +1      0
+lil    0       0       0       0       0       0       0       0       +1      +1      +1      0
+1      0       0       0       0       0       0       0       0       0       +1      +1      0
+big    0       0       0       0       0       0       0       0       0       0       +1      0
+inf    0       0       0       0       0       0       0       0       0       0       0       0
+nan    0       0       0       0       0       0       0       0       0       0       0       0
Note that "NaN" returns false to all comparisons--it's neither smaller nor larger than the other numbers.

The Effect of Waking Up Early on Happiness Quantified


Ever since I published the biggest study on personal happiness and sleep deprivation, I have started to ask myself a lot of follow-up questions. Sleep is becoming one of the most important happiness factors for me, which is why I want to do everything I can to understand and control it more.

And that's what I plan to do in this post. In this follow-up study on personal happiness data and sleep, I embark on a journey to find out if waking up early has an effect on my happiness. I want to find out if there's a way for me to have happy mornings for the rest of my life.

I have analyzed my data and come to the conclusion that I need to wake up between 7 and 8 AM in order to be happy. It's one of the many observations that I've been able to make by analyzing this early morning happiness data.


Introduction

As much as sleep has been studied already, it is still one of the most uncharted areas of science. Conclusions tend to vary wildly depending on which source you consult. Some journals state that sleep deprivation can actually cure depression. How about that?

I beg to differ.

According to my analysis, sleep deprivation has never resulted in a happier day for me. In fact, sleep deprivation tends to increase the likelihood that I experience a bad day.

I came to this conclusion - and many others - after analyzing about 1,000 days of my personal happiness and sleep data.

What I want to find out next is not related to sleep deprivation. I want to see if waking up early has any correlation to my happiness.

Early mornings result in happy mornings?

You've probably heard it before: waking up early allows one to be more productive and energized. There's a ton of listicles claiming billionaires are successful because they wake up early. Therefore, you are an idiot if you don't prioritize waking up early. How can you ever be successful or happy if you don't get used to waking up early?

This is, of course, something that generates my interest.

I have all the data that I need in order to test this thesis. And so that's my goal for this follow-up post: I want to find out if waking up early does, in fact, correlate to an increased level of happiness.

Tracking happiness

For those who are new here: I track my happiness every single day, and I've been doing so for the last 5 years. I rate my happiness every day on a scale from 1 to 10, which is part of my happiness tracking method. I can use this vast amount of data to find out exactly how I can actively steer my life in the best direction possible.

The topic of today's analysis is my sleep. If I can find out whether or not waking up early is correlated to my happiness, I can use that knowledge to become happier in general.

Analyzing my sleep data

If you haven't already read my original study on my personal sleep and happiness data, I suggest you take a minute to scan through it.

If you're lazy (like me), then here's a TLDR of that article:

I've analyzed 1,000 nights of sleep using an app called SleepAsAndroid, which measures my sleep duration and quality every single night. I've used the data from this app to correlate sleep deprivation to my happiness. The result is quite obvious. Sleep deprivation does not directly result in an immediate decrease in happiness, but it does tend to do so indirectly. All of my worst days have occurred while being significantly sleep deprived.

Another observation from this analysis is that my sleep schedule is quite wacky.

I am quite the office slave, and this chart confirms it. I wake up every morning on weekdays to get my ass in the office. As a direct result, I tend to sacrifice my sweet amount of sleep in order to avoid the rush hour. You can see how that affects my rhythm. I need to catch up on my sleep deprivation just about every single weekend. As a result, I am constantly living on a social jetlag.

Those are quite some interesting observations already, which is why I really recommend you use an app like this.

Wakey wakey

Conveniently, I also use this app as an alarm. In addition to a lot of handy features - such as smart alarms and measures to prevent oversleeping - this app also stores my wake-up and alarm times!

This is just the data I need.

As I said before, I am quite a slave to the daily rat race. My commute covers one of the shittiest, most accident-prone highway stretches of the Netherlands. This is why I try to get in the office BEFORE the rush hour starts.

Which is why I set my alarms at 6:00 AM on weekdays.

I am quite the robot in the early mornings. What I mean by that is that I have a strict morning routine. I prepare my breakfast and lunch the night before, and even shower the night before. My alarm goes off at 6:00. I almost ALWAYS snooze for 5 more minutes (I'm weak). I then get up, clean up, get dressed, grab my food and start my engine. This way, I'm usually out of the door at 6:20. If traffic is kind to me, I'll be in the office before 7:00 AM.

This morning routine is visualized quite nicely in the following graph. Please note that this graph is scrollable!

This graph shows every single day in which I tracked my sleep and wake-up times. It shows you everything you need to know about my sleep.

At first glance, you'll probably notice how my alarm goes off at 6:00 AM on most weekdays, and that I let it snooze for about 5-10 minutes each morning.

You'll also notice that there are some gaps in the dataset, which means that I was either on holiday and unable to track my sleep, or I simply forgot.

And finally, you can probably see my wacky rhythm of stacking up sleep deprivation on weekdays, only to recover on the weekends. As I said before, this is a clear case of social jetlag.

I obviously don't set my alarms on weekends, as my weekends are sacred to me. I wouldn't want to miss my free Saturday and Sunday mornings for the world, and I do my best to AVOID any reason to set an alarm on the weekend days. It is my objective to recover from sleep deprivation during the weekends.

On the rare occasion that I fail in my objective, you can safely assume that there was nothing I could do about it...

Anyway, that's not the point of this analysis. I want to find out whether or not waking up early results in happier mornings.

And for that, I need to add my happiness ratings to this analysis.

Happy mornings?

As said before, I have tracked my happiness every single day for the past 5 years. I have used these happiness ratings - in combination with the data in the previous graph - to create the following scatter chart.

This graph shows all the 1,274 days of data that I've tracked. I started tracking my sleep in March 2015, and I've missed a couple of days, but it is still quite a bit of data to present.

I've also highlighted the mornings in which I was woken up by an alarm in red.

This chart should be able to show me any correlation between waking up early and being happy.

But as you can see, it's pretty hard to notice any trend going on.

What's funny about this chart is that the bulk of my alarms is centered around the 6 AM point. This wake-up time has really settled in my mind, as I sometimes even wake up minutes before 6:00 AM without even requiring my alarm!

What's even funnier though is that I apparently needed an alarm to wake me up at 10:28 AM on the 26th of December 2016! What a mess...

Anyway, the reason why I think it's hard to notice any correlation in this dataset is because my happiness ratings are influenced by a virtually endless list of other happiness factors!

My daily happiness ratings are a result of much more than just my wake-up times. Just have a look at the happiness factors that have influenced my happiness before. All of these happiness factors could be distorting the correlation that I'm trying to test in this analysis.

Therefore, I need to look more closely at the data that I have.

How waking up early influences my happiness

An arguably better method to plot a selection of scattered datapoints is via a box plot. I have created the following box plot in order to show whether or not waking up early has an influence on my happiness.

How much does waking up early influence my happiness?

This shows the same data as the previous scatter plot but now divided into 4 bins (boxes).

What you can see from this box plot is that my average happiness rating is the highest when I wake up between 7 and 8 AM.

Not only is the average higher, but also the rest of the distribution of happiness ratings.

Sure, the difference may look pretty small to you, but it cannot be denied that I tend to be happier on days when I wake up between 7 and 8 AM.

And that small difference looks pretty significant to me. Why? Because I know how much my happiness ratings are influenced by other happiness factors.

Sleeping in does not make me happy?

What's also interesting is that sleeping in doesn't seem to have a positive influence on my happiness. And that sounds pretty counter-intuitive to me.

You would say that sleeping in makes me pretty happy, especially since I usually look forward to not having to wake up with an alarm on the weekends.

Then why does my data not confirm this?

It might be because sleeping in means that my days are shorter.

Don't believe me? Here's a chart showing how much time I spent awake versus how early I wake up in the morning.

This data shows that I spend more time awake when I wake up early. The correlation is pretty significant and clear from this data.

This is basically a result of my tendency to build up sleep deprivation during the weekdays and recovering by sleeping in on the weekends. Even though my wake-up times vary wildly, my go-to-sleep times remain quite consistent, usually between 11 and 12 PM.

But let's get back to my busted prediction: why does sleeping in not have a positive influence on my happiness?

This is closely related to the sleep dilemma that I discussed in part 1 of this sleep analysis. Let me refresh your memory.

The dilemma of sleep and happiness

We become and stay happy by being awake, doing things we enjoy doing. Therefore, it's safe to say that our happiness ratings can only increase when we are awake. You see where this is going?

You may decide to sacrifice your sleep for the sake of spending more time on things you like. That's what I have certainly done in the past. I did it rather successfully while traveling in New Zealand: I chose to temporarily reduce my sleep duration because I wanted to travel more. I also spectacularly failed in this regard, when I had my worst day ever while burning out in Kuwait.

Somewhere between these two examples lies an optimum. And we should all try to pursue this optimum. We all want to stay awake as long as possible, to enjoy the things we enjoy doing. But we don't want to shoot ourselves in the foot by becoming seriously sleep deprived. And that is the dilemma of sleep and happiness.

What I am trying to say here is that we need to be awake in order to do things we enjoy. So therefore, spending more time awake allows us to spend more time to pursue happiness.

This is why sleeping in might not result in a higher happiness rating. On average, I spend less time awake after sleeping in, which keeps me from doing things I enjoy doing.

But what about work?

If you are keen on details, you might recall that I'm an office slave. I said so myself!

So even though I often wake up early at 6:00 AM and spend more time awake on a weekday, I still have to spend most of it inside an office. And surely, that cannot have a positive effect on my happiness, right?

Well, as I've analyzed before, my work doesn't have that much of a negative effect on my happiness! In fact, I sometimes actually enjoy working!

In addition, waking up early and spending my time in the office often gives me a sense of purpose and productivity.

And those are all feelings that have a huge indirect effect on my happiness rating.

Early mornings are happy mornings

Remember those articles that I mentioned at the start of this post, claiming that all billionaires have made it a habit to wake up early?

Well, I believe now there is some truth to those articles, even though these articles have quite a high clickbait-factor. I feel like waking up early allows me to be more productive and adds a sense of purpose or meaning to my day.

And that is reflected by my happiness ratings.

What about my alarm?

Some of you might have seen the typical "Happiness is...." quotes.

Some well-known examples:

Happiness is...

.... seeing your dog after a long time away.
.... spending time with loved ones.
.... doing something stupid and laughing about it for weeks.
.... getting a message from someone you love.

But you might have also heard of this one: "Happiness is not having to set your alarm clock for the next day."

Are all these quotes telling the truth?

Obviously, I want to test this quote as well, since I have all the data.

Correlating happiness to my alarm clock

I've created the box plot below, showing my happiness ratings on days with and without an alarm.

Before I created this chart, I was expecting that waking up with an alarm would have a negative effect on my happiness.

But it turns out that isn't the case.

Waking up with an alarm seems to not have an influence on my happiness ratings at all. The average happiness rating on days without an alarm is only 0.02 higher than days with an alarm (7.83 versus 7.81).

So the next time I am small-talking with my colleagues and the topic of "happiness is not having to set an alarm" comes up, I'll say:

No, that's FALSE, because I analyzed 1,274 days of my happiness ratings and sleep data and it turns out that I am not happier on days where I'm not woken up by an alarm! Here's the data to support this statement! *points at graphs*

But all joking aside, what am I really going to do now I know all this?

Not much, really. I will still wake up at 6:00 AM on most weekdays in order to avoid the rush hour, and I'll still continue to use weekends for sleeping in.

However, I am going to try to go to bed earlier during the weekdays (something that I find very hard). This will allow me to reduce my sleep deprivation at the end of the week, which might result in me waking up earlier on the weekends without having to set an alarm!

Some additional points to consider

  • It might not be a coincidence that I'm happiest when waking up between 7 and 8 AM, since that is basically the natural rhythm of human beings. All living beings are in sync with the sun, so it seems logical that we are happiest when we are completely in sync. This gives me another idea: how much does my sleep pattern match the rhythm of the sun, and how does this influence my happiness?
  • It could very well be that my wake-up times are just a proxy in this analysis. There's a big list of happiness factors that could have a far bigger influence on my happiness than just my wake-up times. Just an example: when I'm sick, I won't wake up early to go to work and I usually sleep in. In this case, my happiness is much, much more affected by my sickness than by my wake-up time. Waking up early could just as well be a proxy for another happiness factor that I'm not yet recognizing. Think about work in the office, holidays, days off, sick days, weekend days and practically everything else as a distortion to this analysis.
  • I made a case that spending more time awake allows me to spend more time doing things I like, which is why I might be happier when I wake up early. But I have not yet analyzed this thesis as much as it deserves. I'll leave that to another one of my research posts!

Closing words

Sleep remains one of my biggest happiness factors, and I still have a long way to go before I understand it completely. With any luck, I'll be able to improve my sleep rhythm in such a way that I can actually use it to become happier.

I now have a vague idea on how to get there! 🙂

Now I want to hear from YOU!

What do you think about this analysis? Did it inspire you to think differently about your own sleep rhythm? Do you disagree with me and feel like alarm clocks are the purest evil on this planet?

If you have any questions about anything, please let me know in the comments below, and I’ll be happy to answer!

Cheers!


Musk Announces His First Space Tourist


A Deluded Banker’s Tale of Lehman’s Last Days



2008 bank bailout was unnecessary, Bernanke scared Congress into it


The myth of freedom


Should scholars serve the truth, even at the cost of social harmony? Should you expose a fiction even if that fiction sustains the social order? In writing my latest book, 21 Lessons for the 21st Century, I had to struggle with this dilemma with regard to liberalism.

On the one hand, I believe that the liberal story is flawed, that it does not tell the truth about humanity, and that in order to survive and flourish in the 21st century we need to go beyond it. On the other hand, at present the liberal story is still fundamental to the functioning of the global order. What’s more, liberalism is now attacked by religious and nationalist fanatics who believe in nostalgic fantasies that are far more dangerous and harmful.

So should I speak my mind openly, risking that my words could be taken out of context and used by demagogues and autocrats to further attack the liberal order? Or should I censor myself? It is a mark of illiberal regimes that they make free speech more difficult even outside their borders. Due to the spread of such regimes, it is becoming increasingly dangerous to think critically about the future of our species.

I eventually chose free discussion over self-censorship, thanks to my belief both in the strength of liberal democracy and in the necessity to revamp it. Liberalism’s great advantage over other ideologies is that it is flexible and undogmatic. It can sustain criticism better than any other social order. Indeed, it is the only social order that allows people to question even its own foundations. Liberalism has already survived three big crises – the first world war, the fascist challenge in the 1930s, and the communist challenge in the 1950s-70s. If you think liberalism is in trouble now, just remember how much worse things were in 1918, 1938 or 1968.

In 1968, liberal democracies seemed to be an endangered species, and even within their own borders they were rocked by riots, assassinations, terrorist attacks and fierce ideological battles. If you happened to be amid the riots in Washington on the day after Martin Luther King was assassinated, or in Paris in May 1968, or at the Democratic party’s convention in Chicago in August 1968, you might well have thought that the end was near. While Washington, Paris and Chicago were descending into chaos, Moscow and Leningrad were tranquil, and the Soviet system seemed destined to endure for ever. Yet 20 years later it was the Soviet system that collapsed. The clashes of the 1960s strengthened liberal democracy, while the stifling climate in the Soviet bloc presaged its demise.

So we hope liberalism can reinvent itself yet again. But the main challenge it faces today comes not from fascism or communism, and not even from the demagogues and autocrats that are spreading everywhere like frogs after the rains. This time the main challenge emerges from the laboratories.

Liberalism is founded on the belief in human liberty. Unlike rats and monkeys, human beings are supposed to have “free will”. This is what makes human feelings and human choices the ultimate moral and political authority in the world. Liberalism tells us that the voter knows best, that the customer is always right, and that we should think for ourselves and follow our hearts.

Unfortunately, “free will” isn’t a scientific reality. It is a myth inherited from Christian theology. Theologians developed the idea of “free will” to explain why God is right to punish sinners for their bad choices and reward saints for their good choices. If our choices aren’t made freely, why should God punish or reward us for them? According to the theologians, it is reasonable for God to do so, because our choices reflect the free will of our eternal souls, which are independent of all physical and biological constraints.

This myth has little to do with what science now teaches us about Homo sapiens and other animals. Humans certainly have a will – but it isn’t free. You cannot decide what desires you have. You don’t decide to be introvert or extrovert, easy-going or anxious, gay or straight. Humans make choices – but they are never independent choices. Every choice depends on a lot of biological, social and personal conditions that you cannot determine for yourself. I can choose what to eat, whom to marry and whom to vote for, but these choices are determined in part by my genes, my biochemistry, my gender, my family background, my national culture, etc – and I didn’t choose which genes or family to have.

Hacked … biometric sensors could allow corporations direct access to your inner world. Photograph: Alamy Stock Photo

This is not abstract theory. You can witness this easily. Just observe the next thought that pops up in your mind. Where did it come from? Did you freely choose to think it? Obviously not. If you carefully observe your own mind, you come to realise that you have little control of what’s going on there, and you are not choosing freely what to think, what to feel, and what to want.

Though “free will” was always a myth, in previous centuries it was a helpful one. It emboldened people who had to fight against the Inquisition, the divine right of kings, the KGB and the KKK. The myth also carried few costs. In 1776 or 1945 there was relatively little harm in believing that your feelings and choices were the product of some “free will” rather than the result of biochemistry and neurology.

But now the belief in “free will” suddenly becomes dangerous. If governments and corporations succeed in hacking the human animal, the easiest people to manipulate will be those who believe in free will.

In order to successfully hack humans, you need two things: a good understanding of biology, and a lot of computing power. The Inquisition and the KGB lacked this knowledge and power. But soon, corporations and governments might have both, and once they can hack you, they can not only predict your choices, but also reengineer your feelings. To do so, corporations and governments will not need to know you perfectly. That is impossible. They will just have to know you a little better than you know yourself. And that is not impossible, because most people don’t know themselves very well.

If you believe in the traditional liberal story, you will be tempted simply to dismiss this challenge. “No, it will never happen. Nobody will ever manage to hack the human spirit, because there is something there that goes far beyond genes, neurons and algorithms. Nobody could successfully predict and manipulate my choices, because my choices reflect my free will.” Unfortunately, dismissing the challenge won’t make it go away. It will just make you more vulnerable to it.

It starts with simple things. As you surf the internet, a headline catches your eye: “Immigrant gang rapes local women”. You click on it. At exactly the same moment, your neighbour is surfing the internet too, and a different headline catches her eye: “Trump prepares nuclear strike on Iran”. She clicks on it. Both headlines are fake news stories, generated perhaps by Russian trolls, or by a website keen on increasing traffic to boost its ad revenues. Both you and your neighbour feel that you clicked on these headlines out of your free will. But in fact you have been hacked.

Propaganda and manipulation are nothing new, of course. But whereas in the past they worked like carpet bombing, now they are becoming precision-guided munitions. When Hitler gave a speech on the radio, he aimed at the lowest common denominator, because he couldn’t tailor his message to the unique weaknesses of individual brains. Now it has become possible to do exactly that. An algorithm can tell that you already have a bias against immigrants, while your neighbour already dislikes Trump, which is why you see one headline while your neighbour sees an altogether different one. In recent years some of the smartest people in the world have worked on hacking the human brain in order to make you click on ads and sell you stuff. Now these methods are being used to sell you politicians and ideologies, too.

And this is just the beginning. At present, the hackers rely on analysing signals and actions in the outside world: the products you buy, the places you visit, the words you search for online. Yet within a few years biometric sensors could give hackers direct access to your inner world, and they could observe what’s going on inside your heart. Not the metaphorical heart beloved by liberal fantasies, but rather the muscular pump that regulates your blood pressure and much of your brain activity. The hackers could then correlate your heart rate with your credit card data, and your blood pressure with your search history. What would the Inquisition and the KGB have done with biometric bracelets that constantly monitor your moods and affections? Stay tuned.

Liberalism has developed an impressive arsenal of arguments and institutions to defend individual freedoms against external attacks from oppressive governments and bigoted religions, but it is unprepared for a situation when individual freedom is subverted from within, and when the very concepts of “individual” and “freedom” no longer make much sense. In order to survive and prosper in the 21st century, we need to leave behind the naive view of humans as free individuals – a view inherited from Christian theology as much as from the modern Enlightenment – and come to terms with what humans really are: hackable animals. We need to know ourselves better.

Of course, this is hardly new advice. From ancient times, sages and saints repeatedly advised people to “know thyself”. Yet in the days of Socrates, the Buddha and Confucius, you didn’t have real competition. If you neglected to know yourself, you were still a black box to the rest of humanity. In contrast, you now have competition. As you read these lines, governments and corporations are striving to hack you. If they get to know you better than you know yourself, they can then sell you anything they want – be it a product or a politician.

It is particularly important to get to know your weaknesses. They are the main tools of those who try to hack you. Computers are hacked through pre-existing faulty code lines. Humans are hacked through pre-existing fears, hatreds, biases and cravings. Hackers cannot create fear or hatred out of nothing. But when they discover what people already fear and hate it is easy to push the relevant emotional buttons and provoke even greater fury.

If people cannot get to know themselves by their own efforts, perhaps the same technology the hackers use can be turned around and serve to protect us. Just as your computer has an antivirus program that screens for malware, maybe we need an antivirus for the brain. Your AI sidekick will learn by experience that you have a particular weakness – whether for funny cat videos or for infuriating Trump stories – and would block them on your behalf.

You feel that you clicked on these headlines out of your free will, but in fact you have been hacked. Photograph: Getty images

But all this is really just a side issue. If humans are hackable animals, and if our choices and opinions don’t reflect our free will, what should the point of politics be? For 300 years, liberal ideals inspired a political project that aimed to give as many individuals as possible the ability to pursue their dreams and fulfil their desires. We are now closer than ever to realising this aim – but we are also closer than ever to realising that this has all been based on an illusion. The very same technologies that we have invented to help individuals pursue their dreams also make it possible to re-engineer those dreams. So how can I trust any of my dreams?

From one perspective, this discovery gives humans an entirely new kind of freedom. Previously, we identified very strongly with our desires, and sought the freedom to realise them. Whenever any thought appeared in the mind, we rushed to do its bidding. We spent our days running around like crazy, carried by a furious rollercoaster of thoughts, feelings and desires, which we mistakenly believed represented our free will. What happens if we stop identifying with this rollercoaster? What happens when we carefully observe the next thought that pops up in our mind and ask: “Where did that come from?”

For starters, realising that our thoughts and desires don’t reflect our free will can help us become less obsessive about them. If I see myself as an entirely free agent, choosing my desires in complete independence from the world, it creates a barrier between me and all other entities. I don’t really need any of those other entities – I am independent. It simultaneously bestows enormous importance on my every whim – after all, I chose this particular desire out of all possible desires in the universe. Once we give so much importance to our desires, we naturally try to control and shape the whole world according to them. We wage wars, cut down forests and unbalance the entire ecosystem in pursuit of our whims. But if we understood that our desires are not the outcome of free choice, we would hopefully be less preoccupied with them, and would also feel more connected to the rest of the world.

People sometimes imagine that if we renounce our belief in “free will”, we will become completely apathetic, and just curl up in some corner and starve to death. In fact, renouncing this illusion can have two opposite effects: first, it can create a far stronger link with the rest of the world, and make you more attentive to your environment and to the needs and wishes of others. It is like when you have a conversation with someone. If you focus on what you want to say, you hardly really listen. You just wait for the opportunity to give the other person a piece of your mind. But when you put your own thoughts aside, you can suddenly hear other people.

Second, renouncing the myth of free will can kindle a profound curiosity. If you strongly identify with the thoughts and desires that emerge in your mind, you don’t need to make much effort to get to know yourself. You think you already know exactly who you are. But once you realise “Hi, this isn’t me. This is just some changing biochemical phenomenon!” then you also realise you have no idea who – or what – you actually are. This can be the beginning of the most exciting journey of discovery any human can undertake.

There is nothing new about doubting free will or about exploring the true nature of humanity. We humans have had this discussion a thousand times before. But we never had the technology before. And the technology changes everything. Ancient problems of philosophy are now becoming practical problems of engineering and politics. And while philosophers are very patient people – they can argue about something inconclusively for 3,000 years – engineers are far less patient. Politicians are the least patient of all.

How does liberal democracy function in an era when governments and corporations can hack humans? What’s left of the beliefs that “the voter knows best” and “the customer is always right”? How do you live when you realise that you are a hackable animal, that your heart might be a government agent, that your amygdala might be working for Putin, and that the next thought that emerges in your mind might well be the result of some algorithm that knows you better than you know yourself? These are the most interesting questions humanity now faces.

Unfortunately, these are not the questions most humans ask. Instead of exploring what awaits us beyond the illusion of “free will”, people all over the world are now retreating to find shelter with even older illusions. Instead of confronting the challenge of AI and bioengineering, many are turning to religious and nationalist fantasies that are even less in touch with the scientific realities of our time than liberalism. Instead of fresh political models, what’s on offer are repackaged leftovers from the 20th century or even the middle ages.

When you try to engage with these nostalgic fantasies, you find yourself debating such things as the veracity of the Bible and the sanctity of the nation (especially if you happen, like me, to live in a place like Israel). As a scholar, I find this a disappointment. Arguing about the Bible was hot stuff in the age of Voltaire, and debating the merits of nationalism was cutting-edge philosophy a century ago – but in 2018 it seems a terrible waste of time. AI and bioengineering are about to change the course of evolution itself, and we have just a few decades to figure out what to do with them. I don’t know where the answers will come from, but they are definitely not coming from a collection of stories written thousands of years ago.

So what to do? We need to fight on two fronts simultaneously. We should defend liberal democracy, not only because it has proved to be a more benign form of government than any of its alternatives, but also because it places the fewest limitations on debating the future of humanity. At the same time, we need to question the traditional assumptions of liberalism, and develop a new political project that is better in line with the scientific realities and technological powers of the 21st century.

Greek mythology tells that Zeus and Poseidon, two of the greatest gods, competed for the hand of the goddess Thetis. But when they heard the prophecy that Thetis would bear a son more powerful than his father, both withdrew in alarm. Since gods plan on sticking around for ever, they don’t want a more powerful offspring to compete with them. So Thetis married a mortal, King Peleus, and gave birth to Achilles. Mortals do like their children to outshine them. This myth might teach us something important. Autocrats who plan to rule in perpetuity don’t like to encourage the birth of ideas that might displace them. But liberal democracies inspire the creation of new visions, even at the price of questioning their own foundations.

Yuval Noah Harari’s 21 Lessons for the 21st Century is published by Cape.

Cttrie – Compile time trie based C++ switch for strings


Tobias Hoffmann

C++ User Treffen Aachen, 2018-09-13


C/C++: switch for non-integers - Stack Overflow

switch statement - cppreference.com

fastmatch.h

switch.hpp

Can we do better?

We want to:

  • compare only the remainder
  • get rid of the sorting requirement
  • keep "O(log n)" lookup complexity
  • still have clean code

Trie


  1. Raben
  2. Rabe
  3. Rasten
  4. Rasen

Trie


  1. Raben
  2. Rabe
  3. Rasten
  4. Rasen
smilingthax/cttrie

cttrie usage example i


#include "cttrie.h"
...
  const char *str = ...; // or std::string, ...

  TRIE(str) printf("E\n");
  CASE("Raben") printf("0\n");
  CASE("Rabe") printf("1\n");
  CASE("Rasten") printf("2\n");
  CASE("Rasen") printf("3\n");
  ENDTRIE;
  

cttrie usage example ii


  printf("%d\n",
         TRIE(str) return -1;
         CASE("abc") return 0;
         CASE("bcd") return 1;
         ENDTRIE);

Agenda

  • Lifting the Hood
  • C++ template techniques, index sequences
  • Trie as C++ types
  • Trie lookup
  • String literals and TMP
  • Building the trie
  • Additional features
  • Two applications
  • Extensions to cttrie, other approaches

Lifting the Hood i


#define TRIE(str)  CtTrie::doTrie((str), [&]{

#define CASE(str)  }, CSTR(str), [&]{

#define ENDTRIE    })

template <typename ArgE, typename... Args>
constexpr auto doTrie(stringview str,
                      ArgE&& argE, Args&&... args)
  -> decltype(argE())
{ ... }

// CSTR("abc")  ->  string_t<.../>
cttrie.h
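
As a rough sketch (my illustration, not part of the slides): after preprocessing, "usage example i" above becomes a single call to doTrie with alternating CSTR strings and handler lambdas, the error handler coming first:

  CtTrie::doTrie((str), [&]{ printf("E\n");
    }, CSTR("Raben"),  [&]{ printf("0\n");
    }, CSTR("Rabe"),   [&]{ printf("1\n");
    }, CSTR("Rasten"), [&]{ printf("2\n");
    }, CSTR("Rasen"),  [&]{ printf("3\n");
    });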

Lifting the Hood ii

struct stringview {
  template <unsigned int N>
  constexpr stringview(const char (&ar)[N]) // implicit
    // strips trailing \0
    : begin(ar), size((ar[N-1]==0) ? N-1 : N) {}

  template <typename String,
            typename Sfinae=decltype(
              std::declval<String>().c_str(),
              std::declval<String>().size())>
  constexpr stringview(String&& str)
    : begin(str.c_str()), size(str.size()) {}

  stringview(const char *begin)
    : begin(begin), size(std::strlen(begin)) {}

  constexpr stringview(const char *begin, unsigned int size)
    : begin(begin), size(size) {}

  constexpr bool empty() const {
    return (size==0);
  }

  constexpr char operator*() const {
    // assert(!empty());  // or: throw ?
    return *begin;
  }

  constexpr stringview substr(unsigned int start) const {
    return { begin+start,
             (start<size) ? size-start : 0 };
  }

  constexpr stringview substr(unsigned int start,
                              unsigned int len) const {
    return { begin+start,
             (start<size) ?
               (len<size-start) ? len : size-start
             : 0 };
  }

private:
  const char *begin;
  unsigned int size;
};
  
stringview.h
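
A minimal usage sketch (mine, assuming stringview.h as shown above; the two-argument constructor keeps everything constexpr):

  constexpr stringview sv("Rabe", 4);
  static_assert(*sv == 'R', "first character");
  static_assert(*sv.substr(2) == 'b', "substr(2) starts at 'b'");
  static_assert(sv.substr(4).empty(), "substr clamps past-the-end to empty");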

C++ template techniques

// provides  pack_tools::get_index<I>(Ts&&... ts)
// (≙ std::get<I>(std::make_tuple(ts...)) )

namespace pack_tools {
namespace detail {
  template <unsigned int> struct int_c {};

  template <unsigned int I>
  constexpr void *get_index_impl(int_c<I>) // invalid index
  {
    return {};
  }

  template <typename T0, typename... Ts>
  constexpr T0&& get_index_impl(int_c<0>,
                                T0&& t0, Ts&&... ts)
  {
    return (T0&&)t0;
  }

  template <unsigned int I, typename T0, typename... Ts>
  constexpr auto get_index_impl(int_c<I>,
                                T0&& t0, Ts&&... ts)
    -> decltype(get_index_impl(int_c<I-1>(), (Ts&&)ts...))
  {
    return get_index_impl(int_c<I-1>(), (Ts&&)ts...);
  }
} // namespace detail

template <unsigned int I, typename... Ts>
constexpr auto get_index(Ts&&... ts)
  -> decltype(detail::get_index_impl(detail::int_c<I>(),
                                     (Ts&&)ts...))
{
  static_assert((I<sizeof...(Ts)), "Invalid Index");
  return detail::get_index_impl(detail::int_c<I>(),
                                (Ts&&)ts...);
}

} // namespace pack_tools
  
get_index.h
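
A short usage sketch (mine): get_index picks the I-th element of an argument pack without materialising a std::tuple, which is exactly what doTrie needs later to split the CASE strings from their handler lambdas.

  #include "get_index.h"

  int second_of_three(int a, int b, int c) {
    return pack_tools::get_index<1>(a, b, c);  // returns b
  }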

Index sequences i

// using seq3_t = std::make_index_sequence<3>; // not c++11
using seq3_t = decltype(detail::make_index_sequence<3>());

// => seq3_t = detail::index_sequence<0, 1, 2>;

template <unsigned int... Is>
void foo(detail::index_sequence<Is...>) { ... }

foo(detail::make_index_sequence<3>());

// c++14: index_sequence = integer_sequence<size_t, Is...>;

Index sequences ii

struct nil {};

template <bool B>
using Sfinae = typename std::enable_if<B>::type;

template <unsigned int... Is>
struct index_sequence {};

template <unsigned int N, unsigned int... Is,
          typename =Sfinae<N==0>>
constexpr index_sequence<Is...> make_index_sequence(...)
{ return {}; }

template <unsigned int N, unsigned int... Is,
          typename =Sfinae<(N>0)>>
constexpr auto make_index_sequence(...)
  // argument forces ADL
  -> decltype(make_index_sequence<N-1, N-1, Is...>(nil()))
{ return {}; }

Index sequences iii

namespace detail {
  template <unsigned int... Is,
            typename ArgE, typename... Args>
  constexpr auto doTrie(index_sequence<Is...>, stringview str,
                        ArgE&& argE, Args&&... args)
    -> decltype(argE())
  {
    return checkTrie(
      makeTrie<0>(
        nil(),
        pack_tools::get_index<(2*Is)>((Args&&)args...)...),
      str, (ArgE&&)argE,
      pack_tools::get_index<(2*Is+1)>((Args&&)args...)...);
  }
} // namespace detail

template <typename ArgE, typename... Args>
constexpr auto doTrie(stringview str, ArgE&& argE, Args&&... args)
  -> decltype(argE())
{
  return detail::doTrie(
    detail::make_index_sequence<sizeof...(args)/2>(),
    str, (ArgE&&)argE, (Args&&)args...);
}
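
Spelled out (my annotation, not from the slides, with hypothetical names): for a call like

  doTrie(s, eh, CSTR("a"), f0, CSTR("b"), f1);

sizeof...(args)/2 == 2, so Is = {0, 1}. The even positions get_index<0> and get_index<2> select the string_t arguments and feed makeTrie; the odd positions get_index<1> and get_index<3> select the handlers f0 and f1, which are forwarded to checkTrie together with the error handler eh.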

Trie as C++ types

namespace CtTrie {
using pack_tools::detail::int_c;

template <int Char, typename Next>
struct Transition {};

// multiple inheritance used for cttrie_sw256 ...
template <typename... Transitions>
struct TrieNode : Transitions... {};

// ...
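
To make this concrete, here is a sketch (mine, not from the slides) of the type the builder produces for just the two strings "Rabe" (index 0) and "Raben" (index 1); Transition<-1, int_c<I>> means "string number I ends here", and transitions within a node are kept sorted, with the end marker first:

  using Rabe_Raben_trie =
    TrieNode<Transition<'R',
      TrieNode<Transition<'a',
        TrieNode<Transition<'b',
          TrieNode<Transition<'e',
            TrieNode<Transition<-1, int_c<0>>,               // "Rabe" ends here
                     Transition<'n',
                       TrieNode<Transition<-1, int_c<1>>>>   // "Raben" ends here
            >>>>>>>>>;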

Trie lookup i

check(node, str):
  if (str.empty):
    if (node.Transition[0].Char==-1):
      return node.Transition[0].Next // i.e. index
    return error

  switch (str[0]):
    case node.Transition[0].Char:
      return check(node.Transition[0].Next, str[1:])
    case node.Transition[1].Char:
      return check(node.Transition[1].Next, str[1:])
    ...
  return error // (default)
(pseudocode)

Trie lookup ii

// possible via Transition<-1, int_c<...>>
template <typename FnE, typename... Fns>
constexpr auto checkTrie(TrieNode<> trie, stringview str,
                         FnE&& fne, Fns&&... fns)
  -> decltype(fne())
{
  return fne();
}

template <int Char, typename Next,
          typename FnE, typename... Fns,
          typename =Sfinae<(Char>=0)>>
constexpr auto checkTrie(
  TrieNode<Transition<Char,Next>> trie,
  stringview str, FnE&& fne, Fns&&... fns)
  -> decltype(fne())
{
  return (!str.empty() && (*str==Char))
    ? checkTrie(Next(), str.substr(1), (FnE&&)fne, (Fns&&)fns...)
    : fne();
}

template <typename... Transitions,
          typename FnE, typename... Fns>
constexpr auto checkTrie(
  TrieNode<Transitions...> trie,
  stringview str, FnE&& fne, Fns&&... fns)
  -> decltype(fne())
{
  return (!str.empty())
    ? Switch(*str, str.substr(1),
             trie, (FnE&&)fne, (Fns&&)fns...)
    : fne();
}

template <unsigned int Index, typename... Transitions,
          typename FnE, typename... Fns>
constexpr auto checkTrie(
  TrieNode<Transition<-1,int_c<Index>>, Transitions...>,
  stringview str, FnE&& fne, Fns&&... fns)
  -> decltype(fne())
{
  return (str.empty())
    ? pack_tools::get_index<Index>((Fns&&)fns...)()
    : checkTrie(TrieNode<Transitions...>(), str, (FnE&&)fne, (Fns&&)fns...);
}

Trie lookup: Switch i

template <...>
auto Switch(unsigned char ch, stringview str,
            TrieNode<Transitions...>, FnE&&, Fns&&...)
  -> decltype(fne())
{
  switch (ch) {
    {
    case (Transitions::Char):
      return checkTrie(Transitions::Next(), str,
                       (FnE&&)fne, (Fns&&)fns...);
    }...
  }
  return fne();
}

Trie lookup: Switch ii

template <int Char0, typename Next0,
          int Char1, typename Next1,
          typename FnE,typename... Fns>
auto Switch(unsigned char ch, stringview str,
            TrieNode<Transition<Char0,Next0>,
                     Transition<Char1,Next1>>,
            FnE&& fne, Fns&&... fns)
  -> decltype(fne())
{
  switch (ch) {
  case Char0: return checkTrie(Next0(), str, (FnE&&)fne, (Fns&&)fns...);
  case Char1: return checkTrie(Next1(), str, (FnE&&)fne, (Fns&&)fns...);
  }
  return fne();
}

Trie lookup: Switch iii

// TNext obtained by partial specialization!
next_or_nil<I>(node) =
   has_base(node, Transition<I, TNext>) ? TNext : nil

type table[256] = { next_or_nil<Is>(node)... };
// actually: type_array<A00,A01,...> parameter

switch (str[0]):
  case 0: static_if (is_nil(table[0])): return error;
    return check(table[0], str[1:])
  case 1: static_if (is_nil(table[1])): return error;
    return check(table[1], str[1:])
  ...
  case 255:
    return check(table[255], str[1:])
  

String literals and TMP

Problem: "foo" as template parameter?!

Idea: "abc"[1] == 'b' is possible

template <unsigned char... Chars>
struct string_t {
  static constexpr unsigned int size() {
    return sizeof...(Chars);
  }
  static const char *data() {
    static constexpr const char data[]={Chars...};
    return data;
  }
};

namespace detail {
template <typename Str, unsigned int N, unsigned char... Chars>
struct make_string_t
  : make_string_t<Str, N-1, Str().chars[N-1], Chars...> {};

template <typename Str, unsigned char... Chars>
struct make_string_t<Str, 0, Chars...> {
   typedef string_t<Chars...> type;
 };
} // namespace detail

#define CSTR(str) []{ \
    struct Str { const char *chars = str; }; \
    return ::detail::make_string_t<Str,sizeof(str)>::type(); \
  }()
cstr.h
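
A quick sanity check (my sketch): since sizeof("abc") is 4, the terminating '\0' is kept, and it later serves as the end-of-word marker when the trie is built.

  #include <type_traits>

  void cstr_example() {
    auto s = CSTR("abc");
    static_assert(std::is_same<decltype(s),
                               string_t<'a', 'b', 'c', '\0'>>::value,
                  "CSTR(\"abc\") is string_t<'a','b','c','\\0'>");
  }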

Building the trie i

makeTrie(String0, String1, ..., StringN):
  for each I=0...N:
    trie = trieAdd<I, StringI>(trie)
template <unsigned int I>
constexpr TrieNode<> makeTrie(nil) // nil forces adl
{ return {}; }

template <unsigned int I,typename String0, typename... Strings>
constexpr auto makeTrie(nil, String0, Strings...)
  -> decltype(
    trieAdd<I, String0>(makeTrie<I+1>(nil(), Strings()...)
    ))
{ return {}; }

Building the trie ii

trieAdd<Index, String>(TrieNode<Transitions...>):
  insertSorted<Index>(String, TrieNode< | Transitions...>)
insertSorted:
  • Either there is no transition yet for the next char:
    Insert new Transition into TrieNode at appropriate position.
  • Or, when there is one:
    Take transition, repeat.
  • Start of iteration is (TrieNode(), Transitions...).

Building the trie iii

trieAdd<Index, String>(TrieNode<Transitions...>):
  insertSorted<Index>(String, TrieNode< | Transitions...>)
template <unsigned int Index, typename String,
             typename... Transitions>
constexpr auto trieAdd(TrieNode<Transitions...>)
  -> decltype(
    insertSorted<Index>(
      nil(), String(), // nil forces adl
      TrieNode<>(), Transitions()...))
{ return {}; }

Building the trie iv: Chains

transitionAdd<Index>(string_t<...>) →
  (string_t<Ch0, Chars...>)
    = Transition<Ch0,
                 transitionAdd<Index>(string_t<Chars...>)>

  (string_t<>)
    = Transition<-1, int_c<Index>>

  (string_t<'\0'>)  // alternative ...
    = Transition<-1, int_c<Index>>

Building the trie v: Chains

template <unsigned int Index>
constexpr Transition<-1, int_c<Index>>
transitionAdd(nil, string_t<0>)  // or: string_t<>
{ return {}; }

template <unsigned int Index,
          unsigned char Ch0, unsigned char... Chars>
constexpr Transition<Ch0, TrieNode<decltype(
    transitionAdd<Index>(nil(), string_t<Chars...>()))>>
transitionAdd(nil, string_t<Ch0, Chars...>)
{ return {}; }
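
For example (my sketch), transitionAdd<2>(nil(), string_t<'e','n',0>()) yields the linear chain

  Transition<'e', TrieNode<
    Transition<'n', TrieNode<
      Transition<-1, int_c<2>>>>>>

i.e. one transition per remaining character, terminated by the index marker.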

Building the trie vi

insertSorted<Index>(
  string_t<Ch0, Chars...> s,
  TrieNode<Prefix... | Transition<Ch,Next>, Transitions...>
):

  if (Ch>Ch0):
    TrieNode<Prefix..., transitionAdd<Index>(s),
             Transition<Ch,Next>, Transitions...>

  else if (Ch==Ch0):
    TrieNode<Prefix...,
      Transition<Ch,
        trieAdd<Index, string_t<Chars...>>(Next())>,
      Transitions...>

  else // (Ch<Ch0)
    insertSorted<Index>(s,
      TrieNode<Prefix...,
               Transition<Ch, Next> | Transitions...>)

Building the trie vii

template <unsigned int Index,
          unsigned char... Chars,
          typename... Prefix, typename... Transitions,
          typename =Sfinae<(sizeof...(Chars)==0 ||
                            sizeof...(Transitions)==0)>>
constexpr auto insertSorted(nil,
  string_t<Chars...> s,
  TrieNode<Prefix...>, Transitions...)
  -> TrieNode<Prefix...,
    decltype(transitionAdd<Index>(nil(), s)),
    Transitions...>
{ return {}; }

template <unsigned int Index,
          unsigned char Ch0, unsigned char... Chars,
          typename... Prefix,
          int Ch, typename Next,
          typename... Transitions,
          typename =Sfinae<(Ch>Ch0)>>
constexpr auto insertSorted(nil,
  string_t<Ch0, Chars...> s,
  TrieNode<Prefix...>,
  Transition<Ch,Next>,
  Transitions...)
  -> TrieNode<Prefix...,
    decltype(transitionAdd<Index>(nil(), s)),
    Transition<Ch,Next>,
    Transitions...>
{ return {}; }

template <unsigned int Index,
          unsigned char Ch0, unsigned char... Chars,
          typename... Prefix,
          int Ch, typename Next,
          typename... Transitions,
          typename =Sfinae<(Ch==Ch0)>>
constexpr auto insertSorted(nil,
  string_t<Ch0, Chars...> s,
  TrieNode<Prefix...>,
  Transition<Ch, Next>,
  Transitions...)
  -> TrieNode<
    Prefix...,
    Transition<Ch,
      decltype(trieAdd<Index, string_t<Chars...>>(Next()))>,
    Transitions...>
{ return {}; }

template <unsigned int Index,
          unsigned char Ch0, unsigned char... Chars,
          typename... Prefix,
          int Ch, typename Next,
          typename... Transitions,
          typename =Sfinae<(Ch<Ch0)>>
constexpr auto insertSorted(nil,
  string_t<Ch0, Chars...> s,
  TrieNode<Prefix...>,
  Transition<Ch, Next>,
  Transitions...)
  -> decltype(insertSorted<Index>(nil(), s,
    TrieNode<Prefix..., Transition<Ch, Next>>(),
    Transitions()...))
{ return {}; }

Additional features

template <typename TrieNode, typename FnE, typename... Fns>
constexpr auto checkTrie(TrieNode trie, stringview str,
                         FnE&& fne,Fns&&... fns)
  -> decltype(fne())
{
  return detail::checkTrie(trie, str,
                           (FnE&&)fne, (Fns&&)fns...);
}

// Strings must be string_t
template <typename... Strings>
constexpr auto CtTrie::makeTrie(Strings... strs)
  -> decltype(detail::makeTrie<0>(detail::nil(), strs...))
{ return {}; }

// ---

auto trie=CtTrie::makeTrie(
  CSTR("Rosten"),
  CSTR("Raben"));

// CtTrie::checkTrie(trie, "ab", [&]{...}, [&]{...}, ...);

#include "cttrie-print.h"
CtTrie::printTrie(trie); // or: decltype(trie)() ...
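
A small end-to-end sketch of this "raw" interface (mine, assuming the API shown above):

  int classify(const char *str) {
    auto trie = CtTrie::makeTrie(CSTR("Rosten"), CSTR("Raben"));
    return CtTrie::checkTrie(trie, str,
        [&]{ return -1; },  // no match
        [&]{ return 0; },   // "Rosten" (first string, index 0)
        [&]{ return 1; });  // "Raben"  (second string, index 1)
  }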
  

Application: XML

  for (node=node->children; node; node=node->next) {
    if (node->type != XML_ELEMENT_NODE) {
      continue;
    }
    TRIE((const char *)node->name)
      fprintf(stderr, "Warning: unknown ltconfig/text element: %s\n", (const char *)node->name);

    CASE("in")
      ensure_onlyattr(node, "!rel at");
      unique_xmlFree rel(xmlGetProp(node, (const xmlChar *)"rel"));
      txt.in.rel_loop =
        TRIE((const char *)rel)
          throw UsrError("Unknown text/in/@rel value: %s\n", (const char *)rel);
          return bool(); // needed for return type deduction
        CASE("in") return false;
        CASE("loop") return true;
        ENDTRIE;
      txt.in.at = get_attr_int(node, "at", 0);
      parse_fade_only(node, txt.in.fade_duration);

    CASE("out")
      ensure_onlyattr(node, "at");
      txt.out.at = get_attr_int(node, "at", 0);
      parse_fade_only(node, txt.out.fade_duration);
    ENDTRIE;
  }
  

Extensions to cttrie

  • Partial/substring matching
  • Case insensitive
  • Suffix-at-once

Other approaches

Skyrealms of Jorune


skyrealmsofjorune.com



What Has Gone Before

The Skyrealms of Jorune was the result of a high school English class assignment, developed by Andrew Leker and artist Miles Teves into their first book. Their first edition was on par with a whole breed of "kitchen table" published games, released by small groups of people who had fallen in love with the new pastime of role playing games.

It was groundbreaking in concept and art.

[Image: 2nd Edition boxed set cover]
Andrew's sister Amy joined the team to create another high water mark in gaming with the boxed Second Edition Skyrealms of Jorune release, followed quickly by supplements: Companion: Ardoth, Companion: Burdoth, Companion: Earth Tec, and special releases at the annual gaming industry convention (GenCon), The Iscin Races and Shanthas of Jorune. In addition to the beautifully designed box and books, the Second Edition welcomed new talents to the development, including future editor David Ackerman and artist Alan Okimoto.
At the same time White Wolf magazine showed support with a regular column written by the game creators and new authors. The first of the Jorune fanzines appeared in England - Sarceen's Knowledge.

Second Edition remains the favorite version of the game with most Jorunis.

A third edition was published by Chessex, which is best known for its selection of gaming dice, battle mats, book covers, and other game support products. Several supplements were added to the Jorune mythos, but the timing was not right. It did, however, result in more fanzines entering the arena: Sholari, Borkelby's Folly, Danstead Traveller, and Annals of the Tan Soor Historical Society. And the growth of the internet gave Jorunis the incredible development essays of Jorune According to Sholari James.

Jorune was the last hurrah of "side screen" computer games with Alien Logic – released at the same time Doom and other "first person shooters" took over the gaming world.

The official future of Jorune remains in question, but between what has come before and what is yet to come, it is a great time for newcomers to discover the world, with a stream of new material and fresh availability of the Jorune that was.

We offer over 800 pages of free Jorune material through archive.org (search SOJ RPG), or our downloads page.

Welcome to Return to Jorune, a project of revived passion for one of the pivotal games of role playing.

Sholari - Volume Two - Number One - Now Available

[Image: wraparound cover from SHOLARI v2n1 by Steve Devaney]

Return to your favorite game world, still living in the hearts of fans around the world.

A dedicated core of fans have built fresh Jorunigraphica based on the creation of Andrew Leker and Miles Teves. Return to Jorune offers new material alongside a re-presentation of the original fanzines from over a quarter of a century ago, from editors Alex Blair, Joseph Kessler Adams (as Joe Coleman), Kym Pagh, and Ray Gilliam, which brought the community of fans new work by authors John Snead, Geoff Gray, Matthew Pook, and others, coupled with artwork from originator Miles Teves, Steve Devaney, Robert Smith, Marc Debidour, Chris Lackey, Dominic Green, and others.

Here is what we have so far:

[Image: Thoneport cover]

GOMO GUIDE: THONEPORT, a new release of the original guide book with new art by Robert D. Smith, giving details on the only district in all of Thantier where non-humans, mutants, and genetically engineered races (the "Thone", the worst of Thantierian insults) are allowed to walk free without a human handler. Map, locations, and warnings for tourists, with history on the nation and culture who defend all things human on modern Jorune. 67 pages, PDF and print available through DriveThruRPG.com.
[Image: PDF front cover, Sholari Magazine Volume 2, Number 1]

The return of SHOLARI MAGAZINE, Volume 2, Number 1. An experimental querrid's report on a mysterious site that has been cursed land for over 4,000 years. No one who has ventured there has returned to tell the tale. IN THIS ISSUE: "The Scars of Far Temantro", a re-presentation of a module from Sholari Magazine, Volume 1, Number 2, with more detail; "The Somar"; ten questions on Jorune in "Why Jorune?"; and an editorial essay, this one under the title "A World of One's Own." And more. Cover by Steve Devaney and interior art by Marc Debidour.

Available in Print or PDF through DriveThruRPG.com, 67 pages

[Image: Segment: Sho-Caudal #6 cover]

An unexpected project born of frustration with the Print-on-Demand system, SEGMENT: SHO-CAUDAL is an homage to the column SEGMENT: JORUNE from the original White Wolf Magazine. From the notebooks of Copra Joe, SSC guarantees at least 16 pages per issue, at least one every month. Six issues published so far and two free samplers (from #1 and #4). Secrets, maps, missteps, previews of things we plan to publish as part of Return to Jorune, story structures for writers of Jorune, Keeper Rods, timelines, a preview of a novel (maybe), and maybe a few secrets that will never see revelation anywhere else.

Segment: Sho-Caudal is PDF subscription only. Think "newsletter." [Details]



[Images: covers of Segment: Sho-Caudal #1 through #9]


More covers here as more are published.
The Plan
A full gaming system based on 3rd Edition Skyrealms of Jorune RPG and requiring 1d20 for very fast, dangerous game play. New GOMO GUIDES, the first being for Tan Iricid, the Thriddle domain. More issues of SHOLARI, with resources and adventures around the planet. New edited collections from the original fanzines DANSTEAD TRAVELLER and BORKELBY'S FOLLY, and a steady stream of SEGMENT: SHO-CAUDAL in your email every month.



Skyrealms of Jorune(tm) was a traditional "pencil, paper, and dice" role playing game. It remains the property of Skyrealms, Inc. RETURN TO JORUNE is a fan-based continuation of the excitement felt when we first discovered the game and the world.

Nothing here is official. Gamemasters will find copies of long out-of-print source materials, decades of development by passionate fans who have kept the flame alive, and new material to keep the world and its discoveries fresh.

If you are new - welcome to Jorune, the world of seven moons and isho. If you are a returning Joruni --

Welcome home.
[Image: Races compared (not all)]

What Does Quantum Theory Actually Tell Us about Reality?


For a demonstration that overturned the great Isaac Newton’s ideas about the nature of light, it was staggeringly simple. It “may be repeated with great ease, wherever the sun shines,” the English physicist Thomas Young told the members of the Royal Society in London in November 1803, describing what is now known as a double-slit experiment, and Young wasn’t being overly melodramatic. He had come up with an elegant and decidedly homespun experiment to show light’s wavelike nature, and in doing so refuted Newton’s theory that light is made of corpuscles, or particles.

But the birth of quantum physics in the early 1900s made it clear that light is made of tiny, indivisible units, or quanta, of energy, which we call photons. Young’s experiment, when done with single photons or even single particles of matter, such as electrons and neutrons, is a conundrum to behold, raising fundamental questions about the very nature of reality. Some have even used it to argue that the quantum world is influenced by human consciousness, giving our minds an agency and a place in the ontology of the universe. But does the simple experiment really make such a case?

In the modern quantum form, Young’s experiment involves beaming individual particles of light or matter at two slits or openings cut into an otherwise opaque barrier. On the other side of the barrier is a screen that records the arrival of the particles (say, a photographic plate in the case of photons). Common sense leads us to expect that photons should go through one slit or the other and pile up behind each slit. 

They don’t. Rather, they go to certain parts of the screen and avoid others, creating alternating bands of light and dark. These so-called interference fringes, the kind you get when two sets of waves overlap. When the crests of one wave line up with the crests of another, you get constructive interference (bright bands), and when the crests align with troughs you get destructive interference (darkness).

But there’s only one photon going through the apparatus at any one time. It’s as ifeach photon is going through both slits at once and interfering with itself. This doesn’t make classical sense.

Mathematically speaking, however, what goes through both slits is not a physical particle or a physical wave but something called a wave function—an abstract mathematical function that represents the photon’s state (in this case its position). The wave function behaves like a wave. It hits the two slits, and new waves emanate from each slit on the other side, spread and eventually interfere with each other. The combined wave function can be used to work out the probabilities of where one might find the photon.

The photon has a high probability of being found where the two wave functions constructively interfere and is unlikely to be found in regions of destructive interference. The measurement—in this case the interaction of the wave function with the photographic plate—is said to “collapse” the wave function. It goes from being spread out before measurement to peaking at one of those places where the photon materializes upon measurement. 
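
In textbook notation (my addition, not from the article): if \psi_1(x) and \psi_2(x) are the wave-function contributions arriving at the screen from the two slits, the detection probability is

  P(x) \propto |\psi_1(x) + \psi_2(x)|^2
       = |\psi_1(x)|^2 + |\psi_2(x)|^2 + 2\,\mathrm{Re}[\psi_1^*(x)\,\psi_2(x)]

and it is the final cross term that produces the bright and dark fringes; without it you would simply get the sum of two single-slit patterns.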

This apparent measurement-induced collapse of the wave function is the source of many conceptual difficulties in quantum mechanics. Before the collapse, there’s no way to tell with certainty where the photon will land; it can appear at any one of the places of non-zero probability. There’s no way to chart the photon’s trajectory from the source to the detector. The photon is not real in the sense that a plane flying from San Francisco to New York is real.

Werner Heisenberg, among others, interpreted the mathematics to mean that reality doesn’t exist until observed. “The idea of an objective real world whose smallest parts exist objectively in the same sense as stones or trees exist, independently of whether or not we observe them ... is impossible,” he wrote. John Wheeler, too, used a variant of the double-slit experiment to argue that “no elementary quantum phenomenon is a phenomenon until it is a registered (‘observed,’ ‘indelibly recorded’) phenomenon.”

But quantum theory is entirely unclear about what constitutes a “measurement.” It simply postulates that the measuring device must be classical, without defining where such a boundary between the classical and quantum lies, thus leaving the door open for those who think that human consciousness needs to be invoked for collapse. Last May, Henry Stapp and colleagues argued, in this forum, that the double-slit experiment and its modern variants provide evidence that “a conscious observer may be indispensable” to make sense of the quantum realm and that a transpersonal mind underlies the material world.

But these experiments don’t constitute empirical evidence for such claims. In the double-slit experiment done with single photons, all one can do is verify the probabilistic predictions of the mathematics. If the probabilities are borne out over the course of sending tens of thousands of identical photons through the double slit, the theory claims that each photon’s wave function collapsed—thanks to an ill-defined process called measurement. That’s all.

Also, there are other ways of interpreting the double-slit experiment. Take the de Broglie-Bohm theory, which says that reality is both wave and particle. A photon heads towards the double slit with a definite position at all times and goes through one slit or the other; so each photon has a trajectory. It’s riding a pilot wave, which goes through both slits, interferes and then guides the photon to a location of constructive interference.

In 1979, Chris Dewdney and colleagues at Birkbeck College, London, simulated the theory’s prediction for the trajectories of particles going through the double slit. In the past decade, experimentalists have verified that such trajectories exist, albeit by using a controversial technique called weak measurements. The controversy notwithstanding, the experiments show that the de Broglie-Bohm theory remains in the running as an explanation for the behavior of the quantum world.

Crucially, the theory does not need observers or measurements or a non-material consciousness.

Neither do so-called collapse theories, which argue that wavefunctions collapse randomly: the greater the number of particles in the quantum system, the more likely the collapse. Observers merely discover the outcome. Markus Arndt’s team at the University of Vienna in Austria has been testing these theories by sending larger and larger molecules through the double slit. Collapse theories predict that when particles of matter become more massive than some threshold, they cannot remain in a quantum superposition of going through both slits at once, and this will destroy the interference pattern. Arndt’s team has sent a molecule with more than 800 atoms through the double slit, and they still see interference. The search for the threshold continues.

Roger Penrose has his own version of a collapse theory, in which the more massive the object in superposition, the faster it’ll collapse to one state or the other, because of gravitational instabilities. Again, it’s an observer-independent theory. No consciousness needed. Dirk Bouwmeester at the University of California, Santa Barbara, is testing Penrose’s idea with a version of the double-slit experiment.

Conceptually, the idea is to not just put a photon into a superposition of going through two slits at once, but to also put one of the slits in a superposition of being in two locations at once. According to Penrose, the displaced slit will either stay in superposition or collapse while the photon is in flight, leading to different types of interference patterns. The collapse will depend on the mass of the slits. Bouwmeester has been at work on this experiment for a decade and may soon be able to verify or refute Penrose’s claims.

If nothing else, these experiments are showing that we cannot yet make any claims about the nature of reality, even if the claims are well-motivated mathematically or philosophically. And given that neuroscientists and philosophers of mind don’t agree on the nature of consciousness, claims that it collapses wave functions are premature at best and misleading and wrong at worst.
