Treating timestamps as instants or intervals


Aaron Gable

Jun 9, 2021, 8:28:33 PM
to dev-secur...@mozilla.org
Suppose a certificate has a notAfter field with the value Aug 5 19:42:04 2021 GMT. Suppose further that the current time is Aug 5 19:42:04.005 2021 GMT, five milliseconds after the beginning of the :04 second. Is that certificate valid, or expired?

The BRs, directly incorporating language from RFC 5280, define the "validity period" of a certificate to be: "the period of time from notBefore through notAfter, inclusive".

It is thus clear that checking the validity of a certificate when the current time is exactly equal to its notBefore or notAfter should conclude that the certificate is valid. If the current time is Aug 5 19:42:04.000000 2021 GMT, then the hypothetical certificate above has not yet expired.

But it is not immediately clear what should happen in the first hypothetical case, where some fraction of a second has elapsed since the beginning of the second indicated by the notAfter timestamp. If the current time is Aug 5 19:42:04.005 2021 GMT, then:

In Chrome, the current time is compared directly to the value of the notAfter timestamp, treating notAfter as an instant. The answer would be "not valid".

In NSS, the same procedure appears to be applied (although I am less personally familiar with this codebase). The answer would be "not valid".

In Mozilla::PKIX, the current time is compared directly to the value of the notAfter timestamp, but the current time is also truncated to the granularity of one second (the `mozilla::pkix::Time` class is only capable of one-second resolution). The answer would be "valid".

I have yet to find a single implementation which computes a "validity period" by subtracting the notBefore timestamp from the notAfter timestamp and then adding one additional second to include the entirety of the notAfter's second.
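To make the two readings concrete, here is a minimal Go sketch (hypothetical helper names, not drawn from any of the implementations above): one helper treats notAfter as an instant and compares the clock against it directly; the other counts the whole notAfter second, so the computed lifetime is notAfter minus notBefore plus one second.

```go
package main

import (
	"fmt"
	"time"
)

// instantExpired treats notAfter as an instant: any moment after it is expired.
func instantExpired(now, notAfter time.Time) bool {
	return now.After(notAfter)
}

// intervalLifetime counts the whole notAfter second, so the validity period
// is (notAfter - notBefore) plus one extra second.
func intervalLifetime(notBefore, notAfter time.Time) time.Duration {
	return notAfter.Sub(notBefore) + time.Second
}

func main() {
	// Hypothetical notBefore chosen 90 days earlier; notAfter from the example above.
	notBefore := time.Date(2021, time.May, 7, 19, 42, 4, 0, time.UTC)
	notAfter := time.Date(2021, time.August, 5, 19, 42, 4, 0, time.UTC)
	now := notAfter.Add(5 * time.Millisecond) // Aug 5 19:42:04.005 GMT

	fmt.Println(instantExpired(now, notAfter))         // true: 5ms past the notAfter instant
	fmt.Println(intervalLifetime(notBefore, notAfter)) // 2160h0m1s: 90 days plus one second
}
```

Under the first reading the hypothetical certificate is already expired at 19:42:04.005; under the second its computed lifetime is 90 days plus one second.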

Therefore, I'd like to suggest that the baseline requirements incorporate language to the effect that all timestamps are interpreted to be instantaneous, rather than representing the whole interval which they could cover. I'm curious as to the thoughts and opinions from the community before I actually move forward with such a proposal.

Thanks,
Aaron

Ryan Sleevi

Jun 9, 2021, 8:57:40 PM
to Aaron Gable, dev-secur...@mozilla.org
On Wed, Jun 9, 2021 at 8:28 PM Aaron Gable <aa...@letsencrypt.org> wrote:
In Chrome, the current time is compared directly to the value of the notAfter timestamp, treating notAfter as an instant. The answer would be "not valid".

This is the wrong code. That's just for some UI dressing and... is very old code :)


I explained a bit 
 
In Mozilla::PKIX, the current time is compared directly to the value of the notAfter timestamp, but the current time is also truncated to the granularity of one second (the `mozilla::pkix::Time` class is only capable of one-second resolution). The answer would be "valid".

I have yet to find a single implementation which computes a "validity period" by subtracting the notBefore timestamp from the notAfter timestamp and then adding one additional second to include the entirety of the notAfter's second.

ZLint does ;-)


Also, while I'm not terribly fluent in Rust, I believe webpki does as well (which is to say, "answers valid"), which probably is no surprise given Brian Smith's involvement/leadership in mozilla::pkix and now webpki. Specifically, https://github.com/briansmith/webpki/blob/18cda8a5e32dfc2723930018853a984bd634e667/src/time.rs#L40-L42, https://github.com/briansmith/webpki/blob/18cda8a5e32dfc2723930018853a984bd634e667/src/time.rs#L68-L72, and https://github.com/briansmith/webpki/blob/18cda8a5e32dfc2723930018853a984bd634e667/src/verify_cert.rs#L192-L197

Could you mention what (other) implementations you did check? That might help provide a better discussion. Or was it just Chrome and Firefox so far?

Ryan Sleevi

Jun 9, 2021, 9:00:13 PM
to Ryan Sleevi, Aaron Gable, dev-secur...@mozilla.org
On Wed, Jun 9, 2021 at 8:57 PM Ryan Sleevi <ry...@sleevi.com> wrote:


On Wed, Jun 9, 2021 at 8:28 PM Aaron Gable <aa...@letsencrypt.org> wrote:
In Chrome, the current time is compared directly to the value of the notAfter timestamp, treating notAfter as an instant. The answer would be "not valid".

This is the wrong code. That's just for some UI dressing and... is very old code :)


I explained a bit 

Sorry, that cut off mid-send. I explained a bit on the bug about this strategy - of comparing dates-as-strings.

The actual date (including the 'floor' function I also mentioned on the incident) comes from https://source.chromium.org/chromium/chromium/src/+/main:net/cert/cert_verify_proc_builtin.cc;l=714-726;drc=c06511a314747fab345af58b8523eec1b36caf05 - that's where the 0.001 is truncated to 0.
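As an illustration of that flooring step, here is a minimal Go sketch (not the Chromium code itself): the clock is truncated to whole seconds before the bounds check, so 19:42:04.005 compares equal to the notAfter second and the certificate is still accepted.

```go
package main

import (
	"fmt"
	"time"
)

// validAtTruncated is an illustrative sketch (not Chromium's actual code):
// the clock is floored to whole seconds before the inclusive bounds check.
func validAtTruncated(now, notBefore, notAfter time.Time) bool {
	floored := now.Truncate(time.Second) // 19:42:04.005 -> 19:42:04
	return !floored.Before(notBefore) && !floored.After(notAfter)
}

func main() {
	notBefore := time.Date(2021, time.May, 7, 19, 42, 4, 0, time.UTC) // hypothetical notBefore
	notAfter := time.Date(2021, time.August, 5, 19, 42, 4, 0, time.UTC)
	now := notAfter.Add(5 * time.Millisecond)
	fmt.Println(validAtTruncated(now, notBefore, notAfter)) // true: still within notAfter's second
}
```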

Peter Gutmann

Jun 10, 2021, 12:32:12 AM
to Aaron Gable, dev-secur...@mozilla.org
Aaron Gable <aa...@letsencrypt.org> writes:

>But it is not immediately clear what should happen in the first hypothetical
>case, where some fraction of a second has elapsed since the beginning of the
>second indicated by the notAfter timestamp.

This looks like a standard accuracy vs. precision situation. In this case
since the time in the certificate must be truncated to seconds, the time used
for the comparison should also be truncated to seconds.

Peter.

Kurt Roeckx

Jun 10, 2021, 4:03:18 AM
to Ryan Sleevi, Aaron Gable, dev-secur...@mozilla.org
On Wed, Jun 09, 2021 at 08:57:27PM -0400, Ryan Sleevi wrote:
> Could you mention what (other) implementations you did check? That might
> help provide a better discussion. Or was it just Chrome and Firefox so far?

A bug was filed last month against OpenSSL that it expires too
soon: https://github.com/openssl/openssl/issues/15124

It includes this statement:
> I've tested this also with other libraries, namely Botan, GnuTLS,
> and MbedTLS, and they all behave according to the standard.


Kurt

Michel Le Bihan

Jun 10, 2021, 4:51:02 AM
to dev-secur...@mozilla.org, ku...@roeckx.be, aa...@letsencrypt.org, dev-secur...@mozilla.org, Ryan Sleevi
Why is the precision of 1s that important?

Peter Gutmann

Jun 10, 2021, 6:22:59 AM
to Michel Le Bihan, dev-secur...@mozilla.org, ku...@roeckx.be, aa...@letsencrypt.org, Ryan Sleevi
Michel Le Bihan <michel.le...@gmail.com> writes:

>Why is the precision of 1s that important?

Because of things like this:

https://bugzilla.mozilla.org/show_bug.cgi?id=1715455

Let's Encrypt: certificate lifetimes 90 days plus one second

When you're playing compliance bingo, it's very important to pay attention to
every little detail. In particular running around to make sure every mouse
hole in the barn is patched distracts from the fact that one of the barn walls
is missing.

Peter.

Kurt Roeckx

Jun 10, 2021, 6:56:47 AM
to Grzegorz Prusak, Ryan Sleevi, Aaron Gable, dev-secur...@mozilla.org
On Thu, Jun 10, 2021 at 12:17:23PM +0200, Grzegorz Prusak wrote:
> On Thu, Jun 10, 2021, 10:03 AM Kurt Roeckx <ku...@roeckx.be> wrote:
>
> >
> > A bug was filed last month against OpenSSL that it expires too
> > soon: https://github.com/openssl/openssl/issues/15124
> >
> > It includes this statement:
> > I've tested this also with other libraries, namely Botan, GnuTLS,
> > and MbedTLS, and they all behave according to the standard.
> >
>
> but it doesn't state why +1s interpretation of the standard is the right
> one in the first place. I may be blind, but afaict neither x509 nor x690
> specifies anywhere that time is a discrete, nowhere dense set of points
> (aka seconds), equipped with a counting measure.

RFC5280 only allows a resolution of seconds.

OpenSSL uses a time_t, which also has a resolution of seconds. I
assume most other implementations also have a resolution of 1
second.

You could go and argue that the actual time is always at or past the time stored in the time_t.


Kurt

Rob Stradling

Jun 10, 2021, 7:49:19 AM
to Aaron Gable, ry...@sleevi.com, dev-secur...@mozilla.org
> ZLint does ;-)

For reference, here's one certificate that was (mis)issued by another CA for 398 days plus 1 second, along with the corresponding incident report and evidence of timely revocation.  ZLint and Cablint both report an error due to the additional second.  (The CA actually self-discovered this misissuance after watching https://github.com/zmap/zlint/issues/467 unfold; I fixed Cablint as a result of the same discussion).


https://bugzilla.mozilla.org/show_bug.cgi?id=1663080 (IdenTrust Issuance of certificates greater than 398 days)
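For illustration, here is a hypothetical Go sketch in the style of the check those linters perform (not ZLint's or Cablint's actual code): the validity period is computed as notAfter minus notBefore plus one second, so a certificate whose timestamps are exactly 398 days apart is flagged as 398 days plus one second.

```go
package main

import (
	"fmt"
	"time"
)

// exceedsMaxLifetime is a hypothetical lint in the style of the checks
// described above (not ZLint's or Cablint's actual code): the validity
// period counts the entire notAfter second, i.e. the difference plus one second.
func exceedsMaxLifetime(notBefore, notAfter time.Time) bool {
	const maxLifetime = 398 * 24 * time.Hour
	return notAfter.Sub(notBefore)+time.Second > maxLifetime
}

func main() {
	notBefore := time.Date(2020, time.July, 1, 0, 0, 0, 0, time.UTC)
	notAfter := notBefore.Add(398 * 24 * time.Hour) // timestamps exactly 398 days apart
	fmt.Println(exceedsMaxLifetime(notBefore, notAfter)) // true: 398 days plus one second
}
```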



Kurt Roeckx

Jun 10, 2021, 10:30:02 AM
to Grzegorz Prusak, Ryan Sleevi, Aaron Gable, dev-secur...@mozilla.org
On Thu, Jun 10, 2021 at 01:23:10PM +0200, Grzegorz Prusak wrote:
> > RFC5280 only allows a resolution of seconds.
> >
> > OpenSSL uses a time_t, which also has a resolution of seconds. I
> > assume most other implementations also have a resolution of 1
> > second.
> >
>
> The fact that a type has some limited granularity doesn't mean that values
> of that type represent intervals. It just means that you are unable to
> represent some of the time values in a given representation.

I don't fully understand what you're saying. What I'm saying is
that because of implementation details, checks are only done at a
resolution of 1 second.

Since both the clock resolution and the resolution in the
certificate are 1 second, you can argue that RFC 5280 says
that if the time is equal, it's in the interval.

If you can measure with a higher resolution than 1 second,
and the maximum is 10, is 10.499 seconds still inside the interval
or not? One way of dealing with this is to convert both
to the same resolution, normally by rounding. So 10.499 becomes
10, and you can say it's in the interval. But 10.501 would get
rounded to 11, and would not be in the interval.

Or you can look at the 10 as really saying
"10.0000000000000000000000000000..", and 10.499 would not be
in the interval, but as far as I know, that's not a common
interpretation.

We're really talking about 2 related issues: What is the interval
(or period), and when does the certificate expire.

RFC 5280 talks about an interval and period. An interval would be
something like "1 second" or "1 day", but it gets represented by 2
timestamps. The most natural way to calculate the interval from
those 2 timestamps is the difference between them, so 10:00:01 -
10:00:00 would be 1 second. There are people who argue
that the text in RFC 5280 says that that is actually a 2 second
interval, which I don't agree with.

How you calculate the interval from the 2 timestamps is what started
this discussion, because the other documents place limits on the
interval.

The other question is when the certificate is valid. We don't
measure an interval; we measure the current time and compare
it against the timestamps.

Assume we have a certificate that's valid from 10:00:00 to
11:00:00, and you can measure time with a higher resolution than
1 second, when does the validity period start and end?

If you round the time instead of truncating it (which to me would
be the normal way to compare the timestamps), then at
9:59:59.5000000000001 the certificate is valid, because that gets
rounded to 10:00:00, and it is still valid at 11:00:00.4999999999999,
because that gets rounded to 11:00:00. The validity interval is
1 hour, but because of your higher resolution, you can get an extra
half second on both sides.

If you instead truncate, at 10:00:00.000000000000 it's valid, and
at 11:00:00.9999999999999 it's still valid. The difference between
those 2 timestamps, truncated to 1 second, is still 1 hour.
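A minimal Go sketch of the two clock-handling strategies described above (illustrative helpers, not any particular implementation's code):

```go
package main

import (
	"fmt"
	"time"
)

// validRounded rounds the clock to the nearest second before the bounds check.
func validRounded(now, notBefore, notAfter time.Time) bool {
	t := now.Round(time.Second) // 09:59:59.501 -> 10:00:00
	return !t.Before(notBefore) && !t.After(notAfter)
}

// validTruncated floors the clock to whole seconds before the bounds check.
func validTruncated(now, notBefore, notAfter time.Time) bool {
	t := now.Truncate(time.Second) // 11:00:00.999 -> 11:00:00
	return !t.Before(notBefore) && !t.After(notAfter)
}

func main() {
	day := time.Date(2021, time.June, 10, 0, 0, 0, 0, time.UTC)
	notBefore := day.Add(10 * time.Hour) // 10:00:00
	notAfter := day.Add(11 * time.Hour)  // 11:00:00

	early := notBefore.Add(-499 * time.Millisecond) // 09:59:59.501
	late := notAfter.Add(999 * time.Millisecond)    // 11:00:00.999

	fmt.Println(validRounded(early, notBefore, notAfter))   // true: rounds up to 10:00:00
	fmt.Println(validTruncated(early, notBefore, notAfter)) // false: truncates to 09:59:59
	fmt.Println(validRounded(late, notBefore, notAfter))    // false: rounds up to 11:00:01
	fmt.Println(validTruncated(late, notBefore, notAfter))  // true: truncates to 11:00:00
}
```

Rounding admits up to an extra half second at both ends of the 10:00:00 to 11:00:00 window; truncation admits up to an extra second at the end only.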


Kurt

Ryan Sleevi

Jun 10, 2021, 11:33:19 AM
to Kurt Roeckx, Aaron Gable, Grzegorz Prusak, Ryan Sleevi, dev-secur...@mozilla.org
On Thu, Jun 10, 2021 at 10:30 AM Kurt Roeckx <ku...@roeckx.be> wrote:
We're really talking about 2 related issues: What is the interval
(or period), and when does the certificate expire.

RFC 5280 talks about an interval and period. An interval would be
something like "1 second" or "1 day", but it gets represented by 2
timestamps. The most natural way to calculate the interval from
those 2 timestamps is the difference between them, so 10:00:01 -
10:00:00 would be 1 second. There are people who argue
that the text in RFC 5280 says that that is actually a 2 second
interval, which I don't agree with.

Given that this has repeatedly - for a number of years and over multiple CA incidents - been the understanding reached by multiple CAs, root programs, implementations, can you clarify why you disagree with this?

I think your message accurately captures that the interval for validity is “one second” (that is, interval, not instant), and the certificate is valid on its notAfter. I’m hoping you can explain your rationale for why you don’t agree with it.

This is more obvious when you ask the question:
- Is a certificate whose validity interval is expressed as { 20210601000000 , 20210601000000 } ever valid, and if so, when?

Unambiguously, it seems we all agree that yes, it is minimally valid at exactly 20210601000000 - that is, it’s valid until the first instant *after* the notAfter. Hopefully that is not in question.

Now, how would a CA express a notAfter of 00:00:00.51? You suggest that "round up" (`round()`) may be the right answer, but we know from DER encoding rules how such a timestamp is expressed: we only express to the second granularity (`floor()`). Any rounding that happens, if any, does not happen inside the encoder. Thus, if an issuing CA wants to express such a certificate, it is represented as 20210601000000.

If we apply that same encoding logic to 00:00:00.999…, then under X.509 at least, with GeneralizedTime’s DER encoding, we can actually represent that: as ‘20210601000000.999999999’ (well, to as many bytes as we’re willing to express our precision). On the other hand, RFC 2459 (and subsequent) profiles that out, saying fractional seconds are prohibited.

This ensures that the GeneralizedTime of such a value, according to RFC 2459, matches the same precision as UTCTime’s encoding of such a value: 20210601000000Z vs 210601000000Z. While X.509 permits sender’s (CA’s) choice, 2459 imposes further profile restrictions on valid ranges and timezones.

Thus, even when the interval between the CA’s determination is { 0, 0.999… } - an interval of 1 - the encoded expression is still { 0, 0 } - also an interval of 1 - because that’s the encoded precision, and explicitly stated as such in 2459.
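As a small illustration of that encoding point (a Go formatting sketch, not an actual ASN.1 encoder): because the RFC 2459/5280 profile drops fractional seconds, a notAfter of 00:00:00.999999999 produces exactly the same UTCTime and GeneralizedTime strings as 00:00:00, i.e. the encoder effectively floors.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Two "logical" notAfter values that differ only in fractional seconds.
	exact := time.Date(2021, time.June, 1, 0, 0, 0, 0, time.UTC)
	fractional := exact.Add(999999999 * time.Nanosecond) // 00:00:00.999999999

	// The RFC 2459/5280 profile prohibits fractional seconds, so an encoder
	// simply omits them (a floor): both values yield identical strings.
	const utcLayout = "060102150405"   // UTCTime body: YYMMDDHHMMSS
	const genLayout = "20060102150405" // GeneralizedTime body: YYYYMMDDHHMMSS
	fmt.Println(exact.Format(utcLayout)+"Z", fractional.Format(utcLayout)+"Z") // 210601000000Z 210601000000Z
	fmt.Println(exact.Format(genLayout)+"Z", fractional.Format(genLayout)+"Z") // 20210601000000Z 20210601000000Z
}
```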

That is strictly from the encoding perspective; that is, no such granularity is permitted to be expressed. It would appear you’re making an argument that the application should, however, prior to submitting to the encoder, adjust to the granularity so desired. That is, just because the encoder doesn’t round, doesn’t mean the implementation can’t or shouldn’t.

On its own, this would be a very tempting and arguably logical argument to make, and I do see its appeal. But lacking in that context is that ASN.1 was designed to abstractly support multiple different encodings and encoding rules, and was designed so that the application would not need to know of such conversions that occur.

Put differently: the same “logical” structure can be represented in DER, BER, CER, XER, PER - and should still lend, to the peer, the same logical representation. Indeed, this precise point came up often in the first two decades of X.509 deployment: whether the TBSCertificate itself was permitted to be BER, while the relying party would be responsible for canonicalizing to DER for the purpose of signature verification.

Without rehashing or litigating that debate, the question was primarily about the interaction of X.509 when included within other, larger PDUs, and how the X.509 structure was represented when encapsulated by such a PDU whose wire form was not DER.

In the PKIX context (i.e. the provisions of 2459), this was settled as expecting/expressing that protocols (such as TLS) consistently use the DER profile. Other protocols that make use of BER (e.g. LDAP, CMS, Kerberos) were similarly expected to transmit the certificate-as-DER (which is valid BER). AFAIK, no other mainstream IETF protocol uses the more eclectic encodings with certificates, but happy to be proved wrong here.

Ryan Sleevi

Jun 10, 2021, 11:40:11 AM
to Peter Gutmann, Michel Le Bihan, Ryan Sleevi, aa...@letsencrypt.org, dev-secur...@mozilla.org, ku...@roeckx.be
On Thu, Jun 10, 2021 at 6:22 AM Peter Gutmann <pgu...@cs.auckland.ac.nz> wrote:
When you're playing compliance bingo, it's very important to pay attention to
every little detail.  In particular running around to make sure every mouse
hole in the barn is patched distracts from the fact that one of the barn walls
is missing.

Can you clarify which barn wall you feel is missing?

Matt Palmer

Jun 10, 2021, 11:48:12 AM
to dev-secur...@mozilla.org
On Wed, Jun 09, 2021 at 05:28:21PM -0700, Aaron Gable wrote:
> Suppose a certificate has a notAfter field with the value Aug 5 19:42:04
> 2021 GMT. Suppose further that the current time is Aug 5 19:42:04.005 2021
> GMT, five milliseconds after the beginning of the :04 second. Is that
> certificate valid, or expired?

Expired. Common usage in English is that any parts of a time not specified
are implied to be 0. "The assignment deadline is 2pm Thursday" does not
ordinarily imply that submissions will be accepted at 2:59pm.

- Matt

Matthias Merkel

Jun 10, 2021, 11:48:52 AM
to dev-secur...@mozilla.org, michel.le...@gmail.com, ku...@roeckx.be, aa...@letsencrypt.org, dev-secur...@mozilla.org, Ryan Sleevi
This is important in the context of this incident involving Let's Encrypt: https://bugzilla.mozilla.org/show_bug.cgi?id=1715455

As I see it, having an extra second of validity does not cause any real issues. The issue is that, depending on the definition of certificate lifetime, the certificates violate Let's Encrypt's CPS and are thus misissued. To avoid such incidents in the future, it is now important to clarify this.

Grzegorz Prusak

Jun 10, 2021, 11:49:13 AM
to Kurt Roeckx, Ryan Sleevi, Aaron Gable, dev-secur...@mozilla.org

RFC5280 only allows a resolution of seconds.

OpenSSL uses a time_t, which also has a resolution of seconds. I
assume most other implementations also have a resolution of 1
second.

The fact that a type has some limited granularity doesn't mean that values of that type represent intervals. It just means that you are unable to represent some of the time values in a given representation.

If you were to treat the values as intervals, then any conversion between types of different granularity would be impossible, as values in different types would represent intervals of different lengths.

Aaron Gable

Jun 10, 2021, 12:01:22 PM
to Ryan Sleevi, Kurt Roeckx, Grzegorz Prusak, dev-secur...@mozilla.org
I had a much longer response half-drafted last night, but the conversation has mostly moved on from noting which specific implementations treat timestamps as instants or intervals. Instead, I'll say this:

To me, the discussion on this thread leads to the conclusion that there is not a single incontrovertible interpretation of the baseline requirements, RFC 5280, and other relevant standards.

Ryan, I greatly appreciate your depth of knowledge and context on this issue. But fundamentally, none of the ideas you express -- that timestamp comparisons should be made textually, that the system time should be truncated to the same granularity as the notAfter timestamp, that notAfter timestamps are taken to be intervals and not instants -- none of these made it into the relevant standards or requirements. Not that I have been able to find, and not that you have linked in this thread. You are talking about one possible interpretation of the relevant text, not the only possible interpretation. There is ample evidence that the other interpretation has also been in common use (see: NSS, Chrome, Java's bouncycastle library, Python's certvalidator library, and more).

If truncating the current time prior to comparison is the correct approach, then that should be standardized. If adding the entirety of the trailing second to the validity period is required, then that should be standardized. Neither of these is the case.

On Thu, Jun 10, 2021 at 8:33 AM Ryan Sleevi <ry...@sleevi.com> wrote:

Unambiguously, it seems we all agree that yes, it is minimally valid at exactly 20210601000000 - that is, it’s valid until the first instant *after* the notAfter. Hopefully that is not in question.

Yes, agreed.
 
Now, how would a CA express a notAfter of 00:00:00.51?

It would not. As per RFC 5280, notAfter timestamps have only second-level granularity. The remainder of this argument, which is about how one might represent a more precise value in ASN.1, is not relevant because there is no requirement anywhere that the local system time also be represented in the same format before comparisons are made.

On Thu, Jun 10, 2021 at 8:48 AM Matt Palmer <mpa...@hezmatt.org> wrote:
Expired.  Common usage in English is that any parts of a time not specified
are implied to be 0.  "The assignment deadline is 2pm Thursday" does not
ordinarily imply that submissions will be accepted at 2:59pm.

All of the context in the world does not change the fact that, in common interpretation, 14:59:00.005 is not "included" in 14:59:00; it is after 14:59:00.

Aaron

Postscript: The existence of the OpenSSL bug referenced earlier (https://github.com/openssl/openssl/issues/15124) does not have any bearing on this conversation: according to the bug, OpenSSL is wrong no matter whether we interpret notAfter as an instant or an interval (it says "not valid" even when the timestamps match exactly) so it is not prior art one way or the other.

Peter Gutmann

Jun 10, 2021, 12:39:57 PM
to ry...@sleevi.com, Michel Le Bihan, aa...@letsencrypt.org, dev-secur...@mozilla.org, ku...@roeckx.be
Ryan Sleevi <ry...@sleevi.com> writes:

>Can you clarify which barn wall you feel is missing?

In commercial PKI? How much time do you have?

(The previous message was a generic tongue-in-cheek comment, not meant to be a
starter for any further discussion).

Peter.

Ryan Sleevi

Jun 10, 2021, 1:01:00 PM
to Aaron Gable, Ryan Sleevi, Kurt Roeckx, Grzegorz Prusak, dev-secur...@mozilla.org
On Thu, Jun 10, 2021 at 12:01 PM Aaron Gable <aa...@letsencrypt.org> wrote:
If truncating the current time prior to comparison is the correct approach, then that should be standardized. If adding the entirety of the trailing second to the validity period is required, then that should be standardized. Neither of these is the case.

I appreciate this statement, and to be clear, it's not an entirely unreasonable position to take. I certainly don't want to be seen as defending the text we have as *good* text, because you'll find me first in line to hand out beatings to X.509/RFC 5280 :P

However, I do want to highlight that there's a stark parallel here to the handling of extendedKeyUsages on intermediate certificates.
  • Lack of unambiguous standard with respect to the processing behaviour
  • Two camps of implementations: those who do respect it (and constrain a certificate) and those who do not respect it (and leave the certificate unconstrained)
  • The issue was known about since the late 90s/early 00s ("instant/interval" vs "EKU")
  • Previous discussion on m.d.s.p. on the interpretation
  • Previous CA incidents on the interpretation
With respect to incident management, these latter two points are the most concerning parts of the Let's Encrypt incident, independent of any interpretation issues, and thus cannot be dismissed or addressed by referencing this thread or any disagreements with the interpretation. To be clear, I'm not trying to accuse you of doing so, but wanting to make sure there's alignment on understanding the concerns here for the inter-relationship between the thread and the bug, particularly for the newer participants (welcome!).

With respect to how to address this going forward, we've certainly seen policy clarifications applied to root store policies in the past, and attempts to bring that clarity to the BRs. With EKU, we saw past attempts blocked by some Forum members, and present attempts being pursued in the Profiles work, so hopefully there's an opportunity to improve that work going forward.

Ryan Sleevi

Jun 10, 2021, 1:20:42 PM
to Peter Gutmann, ry...@sleevi.com, Michel Le Bihan, aa...@letsencrypt.org, dev-secur...@mozilla.org, ku...@roeckx.be
On Thu, Jun 10, 2021 at 12:39 PM Peter Gutmann <pgu...@cs.auckland.ac.nz> wrote:
(The previous message was a generic tongue-in-cheek comment, not meant to be a
starter for any further discussion).

Thanks for clarifying. I mostly wanted to make sure you didn't feel this community was overlooking any concrete suggestions for improvements that you felt would have more pressing urgency. 

Aaron Gable

Jun 11, 2021, 12:56:51 PM
to Ryan Sleevi, Kurt Roeckx, Grzegorz Prusak, dev-secur...@mozilla.org
On Thu, Jun 10, 2021 at 10:00 AM Ryan Sleevi <ry...@sleevi.com> wrote:
With respect to incident management, these latter two points are the most concerning parts of the Let's Encrypt incident, independent of any interpretation issues, and thus cannot be dismissed or addressed by referencing this thread or any disagreements with the interpretation. To be clear, I'm not trying to accuse you of doing so, but wanting to make sure there's alignment on understanding the concerns here for the inter-relationship between the thread and the bug, particularly for the newer participants (welcome!).

Yes, agreed. My purpose here is not to argue that the current reports in bugzilla are not incidents -- they clearly are, as set by the precedent of the similar KIR incident. My (personal, I am not speaking on behalf of Let's Encrypt here, all official statements are occurring on those bugs) argument is simply that they are incidents due to that precedent, not due to an unambiguous reading of the standards and requirements. And that therefore the standards and requirements should be clarified. Preferably in the direction of the reading that I find to be intuitive, of course :)

Aaron