Doc. no. J16/01-0052 = WG21 N1337
Date: 09 Nov 2001
Project: Programming Language C++
Reply to: Matt Austern <austern@research.att.com>

C++ Standard Library Active Issues List (Revision 20)

Reference ISO/IEC IS 14882:1998(E)


The purpose of this document is to record the status of issues which have come before the Library Working Group (LWG) of the ANSI (J16) and ISO (WG21) C++ Standards Committee. Issues represent potential defects in the ISO/IEC IS 14882:1998(E) document. Issues are not to be used to request new features or other extensions.

This document contains only library issues which are actively being considered by the Library Working Group, that is, issues with a status of New, Open, Review, or Ready. See the Library Defect Reports List for issues considered defects and the Library Closed Issues List for issues considered closed.

The issues in these lists are not necessarily formal ISO Defect Reports (DR's). While some issues will eventually be elevated to official Defect Report status, other issues will be disposed of in other ways. See Issue Status.

This document is in an experimental format designed for both viewing via a world-wide web browser and hard-copy printing. It is available as an HTML file for browsing or PDF file for printing.

Prior to Revision 14, library issues lists existed in two slightly different versions; a Committee Version and a Public Version. Beginning with Revision 14 the two versions were combined into a single version.

This document includes [bracketed italicized notes] as a reminder to the LWG of current progress on issues. Such notes are strictly unofficial and should be read with caution as they may be incomplete or incorrect. Be aware that LWG support for a particular resolution can quickly change if new viewpoints or killer examples are presented in subsequent discussions.

For the most current official version of this document see http://www.dkuug.dk/jtc1/sc22/wg21. Requests for further information about this document should include the document number above, reference ISO/IEC 14882:1998(E), and be submitted to Information Technology Industry Council (ITI), 1250 Eye Street NW, Washington, DC 20005.

Public information as to how to obtain a copy of the C++ Standard, join the standards committee, submit an issue, or comment on an issue can be found in the C++ FAQ at http://www.research.att.com/~austern/csc/faq.html. Public discussion of C++ Standard related issues occurs on news:comp.std.c++.

For committee members, files available on the committee's private web site include the HTML version of the Standard itself. HTML hyperlinks from this issues list to those files will only work for committee members who have downloaded them into the same disk directory as the issues list files.

Revision History

Issue Status

New - The issue has not yet been reviewed by the LWG. Any Proposed Resolution is purely a suggestion from the issue submitter, and should not be construed as the view of LWG.

Open - The LWG has discussed the issue but is not yet ready to move the issue forward. There are several possible reasons for open status:

A Proposed Resolution for an open issue is still not to be construed as the view of LWG. Comments on the current state of discussions are often given at the end of open issues in an italic font. Such comments are for information only and should not be given undue importance.

Dup - The LWG has reached consensus that the issue is a duplicate of another issue, and will not be further dealt with. A Rationale identifies the issue number of the duplicated issue.

NAD - The LWG has reached consensus that the issue is not a defect in the Standard, and the issue is ready to forward to the full committee as a proposed record of response. A Rationale discusses the LWG's reasoning.

Review - Exact wording of a Proposed Resolution is now available for review on an issue for which the LWG previously reached informal consensus.

Ready - The LWG has reached consensus that the issue is a defect in the Standard, the Proposed Resolution is correct, and the issue is ready to forward to the full committee for further action as a Defect Report (DR).

DR - (Defect Report) - The full J16 committee has voted to forward the issue to the Project Editor to be processed as a Potential Defect Report. The Project Editor reviews the issue, and then forwards it to the WG21 Convenor, who returns it to the full committee for final disposition. This issues list accords the status of DR to all these Defect Reports regardless of where they are in that process.

TC - (Technical Corrigenda) - The full WG21 committee has voted to accept the Defect Report's Proposed Resolution into a Technical Corrigendum. Action on this issue is thus complete and no further action is possible under ISO rules.

RR - (Record of Response) - The full WG21 committee has determined that this issue is not a defect in the Standard. Action on this issue is thus complete and no further action is possible under ISO rules.

Future - In addition to the regular status, the LWG believes that this issue should be revisited at the next revision of the standard. It is usually paired with NAD.

Issues are always given the status of New when they first appear on the issues list. They may progress to Open or Review while the LWG is actively working on them. When the LWG has reached consensus on the disposition of an issue, the status will then change to Dup, NAD, or Ready as appropriate. Once the full J16 committee votes to forward Ready issues to the Project Editor, they are given the status of Defect Report (DR). These in turn may become the basis for Technical Corrigenda (TC), or are closed without action other than a Record of Response (RR). The intent of this LWG process is that only issues which are truly defects in the Standard move to the formal ISO DR status.

Active Issues


23. Num_get overflow result

Section: 22.2.2.1.2 [lib.facet.num.get.virtuals]  Status: Open  Submitter: Nathan Myers  Date: 6 Aug 1998

The current description of numeric input does not account for the possibility of overflow. This is an implicit result of changing the description to rely on the definition of scanf() (which fails to report overflow), and conflicts with the documented behavior of traditional and current implementations.

Users expect, when reading a character sequence that results in a value unrepresentable in the specified type, to have an error reported. The standard as written does not permit this.

Further comments from Dietmar:

I don't feel comfortable with the proposed resolution to issue 23: it kind of simplifies the issue too much. Here is what is going on:

Currently, the behavior of numeric overflow is rather counterintuitive and hard to trace, so I will describe it briefly:

Further discussion from Redmond:

The basic problem is that we've defined our behavior, including our error-reporting behavior, in terms of C90. However, C90's method of reporting overflow in scanf is not technically an "input error". The strto_* functions are more precise.

There was general consensus that failbit should be set upon overflow. We considered three options based on this:

  1. Set failbit upon conversion error (including overflow), and don't store any value.
  2. Set failbit upon conversion error, and also set errno to indicate the precise nature of the error.
  3. Set failbit upon conversion error. If the error was due to overflow, store +-numeric_limits<T>::max() as an overflow indication.

Straw poll: (1) 5; (2) 0; (3) 8.
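For illustration only, here is a minimal sketch of what a caller might observe under the behavior favored by the straw poll (option 3). It is not proposed wording, and the stored value described in the comment is exactly what this issue is deciding:

#include <iostream>
#include <sstream>

int main()
{
    std::istringstream in("99999999999999999999");   // not representable as int
    int i = 0;
    in >> i;
    if (in.fail()) {
        // Under option 1 no value would be stored; under option 3 the
        // extraction would store +numeric_limits<int>::max() as an
        // overflow indication before setting failbit.
        std::cout << "overflow: i = " << i << '\n';
    }
    return 0;
}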

PJP will provide wording.

Proposed resolution:


44. Iostreams use operator== on int_type values

Section: 27 [lib.input.output]  Status: Open  Submitter: Nathan Myers  Date: 6 Aug 1998

Many of the specifications for iostreams specify that character values or their int_type equivalents are compared using operator== or operator!=, even though elsewhere traits::eq() or traits::eq_int_type() is specified to be used for such comparisons. This is an inconsistency; we should change uses of == and != to use the traits members instead.
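For illustration only (this is not proposed wording), where a specification currently compares an int_type value against traits::eof() with ==, the intended traits-based comparison looks roughly like the following; at_eof is a made-up helper name:

#include <streambuf>
#include <string>

// Compare int_type values with the traits members rather than with the
// built-in operators.
bool at_eof(std::streambuf& sb)
{
    typedef std::char_traits<char> traits;
    traits::int_type c = sb.sgetc();
    return traits::eq_int_type(c, traits::eof());   // not: c == traits::eof()
}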

Proposed resolution:

[Kona: Nathan to supply proposed wording]

[ Tokyo: the LWG reaffirmed that this is a defect, and requires careful review of clause 27 as the changes are context sensitive. ]


76. Can a codecvt facet always convert one internal character at a time?

Section: 22.2.1.5 [lib.locale.codecvt]  Status: Ready  Submitter: Matt Austern  Date: 25 Sep 1998

This issue concerns the requirements on classes derived from codecvt, including user-defined classes. What are the restrictions on the conversion from external characters (e.g. char) to internal characters (e.g. wchar_t)? Or, alternatively, what assumptions about codecvt facets can the I/O library make?

The question is whether it's possible to convert from internal characters to external characters one internal character at a time, and whether, given a valid sequence of external characters, it's possible to pick off internal characters one at a time. Or, to put it differently: given a sequence of external characters and the corresponding sequence of internal characters, does a position in the internal sequence correspond to some position in the external sequence?

To make this concrete, suppose that [first, last) is a sequence of M external characters and that [ifirst, ilast) is the corresponding sequence of N internal characters, where N > 1. That is, my_encoding.in(), applied to [first, last), yields [ifirst, ilast). Now the question: does there necessarily exist a subsequence of external characters, [first, last_1), such that the corresponding sequence of internal characters is the single character *ifirst?

(What a "no" answer would mean is that my_encoding translates sequences only as blocks. There's a sequence of M external characters that maps to a sequence of N internal characters, but that external sequence has no subsequence that maps to N-1 internal characters.)

Some of the wording in the standard, such as the description of codecvt::do_max_length (22.2.1.5.2 [lib.locale.codecvt.virtuals], paragraph 11) and basic_filebuf::underflow (27.8.1.4 [lib.filebuf.virtuals], paragraph 3) suggests that it must always be possible to pick off internal characters one at a time from a sequence of external characters. However, this is never explicitly stated one way or the other.

This issue seems (and is) quite technical, but it is important if we expect users to provide their own encoding facets. This is an area where the standard library calls user-supplied code, so a well-defined set of requirements for the user-supplied code is crucial. Users must be aware of the assumptions that the library makes. This issue affects positioning operations on basic_filebuf, unbuffered input, and several of codecvt's member functions.
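For illustration only, the following sketch shows the kind of call basic_filebuf must be able to make if the answer is "yes": a conversion into an output range holding exactly one internal character. The facet parameters (wchar_t/char/mbstate_t) and the helper name are assumptions made for the example:

#include <cwchar>
#include <locale>

// Can one internal character always be picked off from a valid external
// sequence?  This is roughly what basic_filebuf does during unbuffered input.
std::codecvt_base::result
get_one_internal(const std::codecvt<wchar_t, char, std::mbstate_t>& cvt,
                 std::mbstate_t& state,
                 const char* from, const char* from_end,
                 const char*& from_next, wchar_t& c)
{
    wchar_t* to_next;
    return cvt.in(state, from, from_end, from_next, &c, &c + 1, to_next);
}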

Proposed resolution:

Add the following text as a new paragraph, following 22.2.1.5.2 [lib.locale.codecvt.virtuals] paragraph 2:

A codecvt facet that is used by basic_filebuf (27.8 [lib.file.streams]) must have the property that if

    do_out(state, from, from_end, from_next, to, to_lim, to_next)
would return ok, where from != from_end, then
    do_out(state, from, from + 1, from_next, to, to_lim, to_next)
must also return ok, and that if
    do_in(state, from, from_end, from_next, to, to_lim, to_next)
would return ok, where to != to_lim, then
    do_in(state, from, from_end, from_next, to, to + 1, to_next)

must also return ok. [Footnote: Informally, this means that basic_filebuf assumes that the mapping from internal to external characters is 1 to N: a codecvt that is used by basic_filebuf must be able to translate characters one internal character at a time. --End Footnote]

[Redmond: Minor change in proposed resolution. Original proposed resolution talked about "success", with a parenthetical comment that success meant returning ok. New wording removes all talk about "success", and just talks about the return value.]

Rationale:

The proposed resolution says that conversions can be performed one internal character at a time. This rules out some encodings that would otherwise be legal. The alternative answer would mean there would be some internal positions that do not correspond to any external file position.

An example of an encoding that this rules out is one where the internT and externT are of the same type, and where the internal sequence c1 c2 corresponds to the external sequence c2 c1.

It was generally agreed that basic_filebuf relies on this property: it was designed under the assumption that the external-to-internal mapping is N-to-1, and it is not clear that basic_filebuf is implementable without that restriction.

The proposed resolution is expressed as a restriction on codecvt when used by basic_filebuf, rather than a blanket restriction on all codecvt facets, because basic_filebuf is the only other part of the library that uses codecvt. If a user wants to define a codecvt facet that implements a more general N-to-M mapping, there is no reason to prohibit it, so long as the user does not expect basic_filebuf to be able to use it.


91. Description of operator>> and getline() for string<> might cause endless loop

Section: 21.3.7.9 [lib.string.io]  Status: Review  Submitter: Nico Josuttis  Date: 29 Sep 1998

Operator >> and getline() for strings read until eof() in the input stream is true. However, eof() might never become true if the stream reaches a state in which it can no longer deliver characters without having reached end-of-file. Shouldn't this be changed so that they read until !good()?

Proposed resolution:

In 21.3.7.9 [lib.string.io], paragraph 1, replace:

Effects: Begins by constructing a sentry object k as if k were constructed by typename basic_istream<charT,traits>::sentry k(is). If bool(k) is true, it calls str.erase() and then extracts characters from is and appends them to str as if by calling str.append(1, c). If is.width() is greater than zero, the maximum number n of characters appended is is.width(); otherwise n is str.max_size(). Characters are extracted and appended until any of the following occurs:

with:

Effects: Behaves as a formatted input function (27.6.1.2.1 [lib.istream.formatted.reqmts]). After constructing a sentry object, if the sentry converts to true, calls str.erase() and then extracts characters from is and appends them to str as if by calling str.append(1,c). If is.width() is greater than zero, the maximum number n of characters appended is is.width(); otherwise n is str.max_size(). Characters are extracted and appended until any of the following occurs:

In 21.3.7.9 [lib.string.io], paragraph 6, replace

Effects: Begins by constructing a sentry object k as if by typename basic_istream<charT,traits>::sentry k(is, true). If bool(k) is true, it calls str.erase() and then extracts characters from is and appends them to str as if by calling str.append(1, c) until any of the following occurs:

with:

Effects: Behaves as an unformatted input function (27.6.1.3 [lib.istream.unformatted]), except that it does not affect the value returned by subsequent calls to basic_istream<>::gcount(). After constructing a sentry object, if the sentry converts to true, calls str.erase() and then extracts characters from is and appends them to str as if by calling str.append(1,c) until any of the following occurs:

[Redmond: Made changes in proposed resolution. operator>> should be a formatted input function, not an unformatted input function. getline should not be required to set gcount, since there is no mechanism for gcount to be set except by one of basic_istream's member functions.]

Rationale:

The real issue here is whether or not these string input functions get their characters from a streambuf, rather than by calling an istream's member functions. A streambuf signals failure either by returning eof or by throwing an exception; there are no other possibilities. The proposed resolution makes it clear that these two functions do get characters from a streambuf.


92. Incomplete Algorithm Requirements

Section: 25 [lib.algorithms]  Status: Open  Submitter: Nico Josuttis  Date: 29 Sep 1998

The standard does not state how often a function object is copied or called inside an algorithm, nor the order of the calls. This may lead to surprising/buggy behavior. Consider the following example:

class Nth {    // function object that returns true for the nth element 
  private: 
    int nth;     // element to return true for 
    int count;   // element counter 
  public: 
    Nth (int n) : nth(n), count(0) { 
    } 
    bool operator() (int) { 
        return ++count == nth; 
    } 
}; 
.... 
// remove third element 
    list<int>::iterator pos; 
    pos = remove_if(coll.begin(),coll.end(),  // range 
                    Nth(3));                  // remove criterion
    coll.erase(pos,coll.end()); 

This call, in fact, removes the 3rd AND the 6th element. This happens because the usual implementation of the algorithm copies the function object internally:

template <class ForwIter, class Predicate> 
ForwIter std::remove_if(ForwIter beg, ForwIter end, Predicate op) 
{ 
    beg = find_if(beg, end, op); 
    if (beg == end) { 
        return beg; 
    } 
    else { 
        ForwIter next = beg; 
        return remove_copy_if(++next, end, beg, op); 
    } 
} 

The algorithm uses find_if() to find the first element that should be removed. However, it then uses a copy of the passed function object to process the remaining elements (if any). Here, Nth is used again and also removes the sixth element. This behavior compromises the advantage of function objects being able to have state. It could be avoided at no cost (by implementing remove_if() directly instead of calling find_if()).
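One workaround available to users today, independent of any resolution, is to keep the predicate's state outside the function object so that every copy made by the algorithm shares it. The following sketch (NthShared and remove_third are illustrative names) reuses the example above:

#include <algorithm>
#include <list>

// Variant of Nth whose counter lives outside the object, so the copies
// made inside remove_if() all increment the same count.
class NthShared {
  private:
    int  nth;     // element to return true for
    int* count;   // shared element counter, owned by the caller
  public:
    NthShared(int n, int* c) : nth(n), count(c) {}
    bool operator()(int) { return ++*count == nth; }
};

void remove_third(std::list<int>& coll)
{
    int count = 0;
    std::list<int>::iterator pos =
        std::remove_if(coll.begin(), coll.end(), NthShared(3, &count));
    coll.erase(pos, coll.end());   // removes only the third element
}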

Proposed resolution:

In [lib.function.objects] 20.3 Function objects add as new paragraph 6 (or insert after paragraph 1):

Option 1:

Predicates are functions or function objects that fulfill the following requirements:
  - They return a Boolean value (bool or a value convertible to bool)
  - It doesn't matter for the behavior of a predicate how often it is copied or assigned and how often it is called.

Option 2:

- if it's a function:
  - All calls with the same argument values yield the same result.
- if it's a function object:
  - In any sequence of calls to operator () without calling any non-constant member function, all calls with the same argument values yield the same result. 
- After an assignment or copy both objects return the same result for the same values.

[Santa Cruz: The LWG believes that there may be more to this than meets the eye. It applies to all function objects, particularly predicates. Two questions: (1) must a function object be copyable? (2) how many times is a function object called?  These are in effect questions about state.  Function objects appear to require special copy semantics to make state work, and may fail if calling alters state and calling occurs an unexpected number of times.]

[Dublin: Pete Becker felt that this may not be a defect, but rather something that programmers need to be educated about. There was discussion of adding wording to the effect that the number and order of calls to function objects, including predicates, not affect the behavior of the function object.]

[Pre-Kona: Nico comments: It seems the problem is that we don't have a clear statement of "predicate" in the standard. People including me seemed to think "a function returning a Boolean value and being able to be called by an STL algorithm or be used as sorting criterion or ... is a predicate". But a predicate has more requirements: It should never change its behavior due to a call or being copied. IMHO we have to state this in the standard. If you like, see section 8.1.4 of my library book for a detailed discussion.]

[Kona: Nico will provide wording to the effect that "unless otherwise specified, the number of copies of and calls to function objects by algorithms is unspecified".  Consider placing in 25 [lib.algorithms] after paragraph 9.]

[Pre-Tokyo: Angelika Langer comments: if the resolution is that algorithms are free to copy and pass around any function objects, then it is a valid question whether they are also allowed to change the type information from reference type to value type.]

[Tokyo: Nico will discuss this further with Matt as there are multiple problems beyond the underlying problem of no definition of "Predicate".]

[Post-Tokyo: Nico provided the above proposed resolutions.]


96. Vector<bool> is not a container

Section: 23.2.5 [lib.vector.bool]  Status: Open  Submitter: AFNOR  Date: 7 Oct 1998

vector<bool> is not a container as its reference and pointer types are not references and pointers.

Also it forces everyone to have a space optimization instead of a speed one.

See also: 99-0008 == N1185 Vector<bool> is Nonconforming, Forces Optimization Choice.

Proposed resolution:

[In Santa Cruz the LWG felt that this was Not A Defect.]

[In Dublin many present felt that failure to meet Container requirements was a defect. There was disagreement as to whether or not the optimization requirements constituted a defect.]

[The LWG looked at the following resolutions in some detail:
     * Not A Defect.
     * Add a note explaining that vector<bool> does not meet Container requirements.
     * Remove vector<bool>.
     * Add a new category of container requirements which vector<bool> would meet.
     * Rename vector<bool>.

No alternative had strong, widespread support, and every alternative had at least one "over my dead body" response.

There was also mention of a transition scheme something like (1) add vector_bool and deprecate vector<bool> in the next standard. (2) Remove vector<bool> in the following standard.]

[Modifying container requirements to permit returning proxies (thus allowing container requirements conforming vector<bool>) was also discussed.]

[It was also noted that there is a partial but ugly workaround, in that vector<bool> may be further specialized with a custom allocator.]
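For illustration only, a sketch of that workaround. It assumes the implementation specializes only vector<bool, allocator<bool> >, so that a user-supplied allocator (plain_alloc is a made-up name) selects the ordinary vector template with real bool elements:

#include <memory>
#include <vector>

// Minimal C++98-style allocator that reuses std::allocator's machinery;
// its only purpose is to be a distinct type from std::allocator<bool>.
template <class T>
class plain_alloc : public std::allocator<T> {
public:
    template <class U> struct rebind { typedef plain_alloc<U> other; };
    plain_alloc() {}
    template <class U> plain_alloc(const plain_alloc<U>&) {}
};

// Typically not the packed specialization, so iterators yield real bool&.
typedef std::vector<bool, plain_alloc<bool> > unpacked_bool_vector;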

[Kona: Herb Sutter presented his paper J16/99-0035==WG21/N1211, vector<bool>: More Problems, Better Solutions. Much discussion of a two step approach: a) deprecate, b) provide replacement under a new name. LWG straw vote on that: 1-favor, 11-could live with, 2-over my dead body. This resolution was mentioned in the LWG report to the full committee, where several additional committee members indicated over-my-dead-body positions.]

[Tokyo: Not discussed by the full LWG; no one claimed new insights and so time was more productively spent on other issues. In private discussions it was asserted that requirements for any solution include 1) Increasing the full committee's understanding of the problem, and 2) providing compiler vendors, authors, teachers, and of course users with specific suggestions as to how to apply the eventual solution.]


98. Input iterator requirements are badly written

Section: 24.1.1 [lib.input.iterators]  Status: Open  Submitter: AFNOR  Date: 7 Oct 1998

Table 72 in 24.1.1 [lib.input.iterators] specifies semantics for *r++ of:

   { T tmp = *r; ++r; return tmp; }

There are two problems with this. First, the return type is specified to be "T", as opposed to something like "convertible to T". This is too specific: we want to allow *r++ to return an lvalue.

Second, writing the semantics in terms of code misleadingly suggests that the effects of *r++ should precisely replicate the behavior of this code, including side effects. (What if it's a user-defined type whose copy constructor has observable behavior?) We should replace the code with words, or else put some blanket statement in clause 17 saying that code samples aren't intended to specify exactly how many times a copy constructor is called, even if the copy constructor has observable behavior. (See issue 334 for a similar problem.)
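As a small illustration of the first point, a plain pointer is a valid input iterator for which *r++ yields an lvalue:

void example()
{
    int a[] = { 1, 2, 3 };
    int* r = a;
    // *r++ is an lvalue of type int&, which the "returns T" entry in
    // Table 72 technically does not allow.
    int& x = *r++;   // binds to a[0]; r now points to a[1]
    (void)x;
}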

Proposed resolution:


120. Can an implementor add specializations?

Section: 17.4.3.1 [lib.reserved.names]  Status: Open  Submitter: Judy Ward  Date: 15 Dec 1998

The original issue asked whether a library implementor could specialize standard library templates for built-in types. (This was an issue because users are permitted to explicitly instantiate standard library templates.)

Specializations are no longer a problem, because of the resolution to core issue 259. Under the proposed resolution, it will be legal for a translation unit to contain both a specialization and an explicit instantiation of the same template, provided that the specialization comes first. In such a case, the explicit instantiation will be ignored. Further discussion of library issue 120 assumes that the core 259 resolution will be adopted.

However, as noted in lib-7047, one piece of this issue still remains: what happens if a standard library implementor explicitly instantiates a standard library template? It's illegal for a program to contain two different explicit instantiations of the same template for the same type in two different translation units (an ODR violation), and the core working group doesn't believe it is practical to relax that restriction.

The issue, then, is: are users allowed to explicitly instantiate standard library templates for non-user-defined types? The status quo answer is 'yes'. Changing it to 'no' would give library implementors more freedom.

This is an issue because, for performance reasons, library implementors often need to explicitly instantiate standard library templates. (for example, std::basic_string<char>) Does giving users freedom to explicitly instantiate standard library templates for non-user defined types make it impossible or painfully difficult for library implementors to do this?
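For concreteness, the user freedom in question looks like the following (illustration only; whether an implementation must accept it is exactly what this issue asks):

#include <string>

// A user translation unit explicitly instantiating a standard library
// template for a non-user-defined type.  If the library implementation
// itself also contains an explicit instantiation of basic_string<char>,
// the program contains two explicit instantiations of the same template.
template class std::basic_string<char>;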

John Spicer suggests, in lib-8957, that library implementors have a mechanism they can use for explicit instantiations that doesn't prevent users from performing their own explicit instantiations: put each explicit instantiation in its own object file. (Different solutions might be necessary for Unix DSOs or MS-Windows DLLs.) On some platforms, library implementors might not need to do anything special: the "undefined behavior" that results from having two different explicit instantiations might be harmless.

Proposed resolution:

Option 1.

Append to 17.4.3.1 [lib.reserved.names] paragraph 1:

A program may explicitly instantiate any templates in the standard library only if the declaration depends on a user-defined name of external linkage and the instantiation meets the standard library requirements for the original template.

Option 2.

In light of the resolution to core issue 259, no normative changes in the library clauses are necessary. Add the following non-normative note to the end of 17.4.3.1 [lib.reserved.names] paragraph 1:

[Note: A program may explicitly instantiate standard library templates, even when an explicit instantiation does not depend on a user-defined name. --end note]

[Copenhagen: LWG discussed three options. (A) Users may not explicitly instantiate standard library templates, except on user-defined types. Consequence: library implementors may freely specialize or instantiate templates. (B) It is implementation defined whether users may explicitly instantiate standard library templates on non-user-defined types. Consequence: library implementors may freely specialize or instantiate templates, but may need to document some or all templates that have been explicitly instantiated. (C) Users may explicitly instantiate any standard library template. ]

[Straw poll (first number is favor, second is strongly oppose): A - 4, 0; B - 0, 9; C - 9, 1. Proposed resolution 1, above, is option A. (It is the original proposed resolution.) Proposed resolution 2, above, is option C. Because there was no support for option B, no wording is provided.]

[Redmond: discussed again; straw poll had results similar to those of Copenhagen (A - 1, 3; B - 6, 2; C - 8, 4). Most people said they could live with any option. The only objection to option A is potential implementation difficulty. Steve Clamage volunteered do a survey to see if there are any popular platforms where option A would present a real problem for implementors. See his reflector message, c++std-lib-9002. ]


123. Should valarray helper arrays fill functions be const?

Section: 26.3.5.4 [lib.slice.arr.fill], 26.3.7.4 [lib.gslice.array.fill], 26.3.8.4 [lib.mask.array.fill], 26.3.9.4 [lib.indirect.array.fill]  Status: Review  Submitter: Judy Ward  Date: 15 Dec 1998

One of the operator= member functions in the valarray helper arrays is const and the other is not. For example, look at slice_array. This operator= in Section 26.3.5.2 [lib.slice.arr.assign] is const:

    void operator=(const valarray<T>&) const;

but this one in Section 26.3.5.4 [lib.slice.arr.fill] is not:

    void operator=(const T&);

The description of the semantics for these two functions is similar.
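A hedged illustration of why the missing const matters: both assignments below write through the helper array into the underlying valarray, yet only the first is declared const today (fill is an illustrative name):

#include <valarray>

void fill(const std::slice_array<double>& sa,
          const std::valarray<double>& va)
{
    sa = va;     // OK: operator=(const valarray<T>&) is a const member
    sa = 0.0;    // ill-formed as the standard stands: operator=(const T&) is non-const
}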

Proposed resolution:

26.3.5 [lib.template.slice.array] Template class slice_array

In the class template definition for slice_array, replace the member function declaration

      void operator=(const T&);
    

with

      void operator=(const T&) const;
    

26.3.5.4 [lib.slice.arr.fill] slice_array fill function

Change the function declaration

      void operator=(const T&);
    

to

      void operator=(const T&) const;
    

26.3.7 [lib.template.gslice.array] Template class gslice_array

In the class template definition for gslice_array, replace the member function declaration

      void operator=(const T&);
    

with

      void operator=(const T&) const;
    

26.3.7.4 [lib.gslice.array.fill] gslice_array fill function

Change the function declaration

      void operator=(const T&);
    

to

      void operator=(const T&) const;
    

26.3.8 [lib.template.mask.array] Template class mask_array

In the class template definition for mask_array, replace the member function declaration

      void operator=(const T&);
    

with

      void operator=(const T&) const;
    

26.3.8.4 [lib.mask.array.fill] mask_array fill function

Change the function declaration

      void operator=(const T&);
    

to

      void operator=(const T&) const;
    

26.3.9 [lib.template.indirect.array] Template class indirect_array

In the class template definition for indirect_array, replace the member function declaration

      void operator=(const T&);
    

with

      void operator=(const T&) const;
    

26.3.9.4 [lib.indirect.array.fill] indirect_array fill function

Change the function declaration

      void operator=(const T&);
    

to

      void operator=(const T&) const;
    

[Redmond: Robert provided wording.]

Rationale:

There's no good reason for one version of operator= being const and another one not. Because of issue 253, this now matters: these functions are now callable in more circumstances. In many existing implementations, both versions are already const.


167. Improper use of traits_type::length()

Section: 27.6.2.5.4 [lib.ostream.inserters.character]  Status: Open  Submitter: Dietmar Kühl  Date: 20 Jul 1999

Paragraph 4 states that the length is determined using traits::length(s). Unfortunately, this function is not defined, for example, if the character type is wchar_t and the type of s is char const*. Similar problems exist if the character type is char and the type of s is either signed char const* or unsigned char const*.
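A minimal example of the problem (illustration only): in the following insertion, charT is wchar_t but s is const char*, so the traits::length(s) of paragraph 4 would mean char_traits<wchar_t>::length(const char*), which does not type-check.

#include <iostream>

int main()
{
    // Uses operator<<(basic_ostream<wchar_t, traits>&, const char*);
    // paragraph 4 cannot literally apply here.
    std::wcout << "hello, world";
    return 0;
}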

Proposed resolution:

Change 27.6.2.5.4 [lib.ostream.inserters.character] paragraph 4 from:

Effects: Behaves like a formatted inserter (as described in lib.ostream.formatted.reqmts) of out. After a sentry object is constructed it inserts characters. The number of characters starting at s to be inserted is traits::length(s). Padding is determined as described in lib.facet.num.put.virtuals. The traits::length(s) characters starting at s are widened using out.widen (lib.basic.ios.members). The widened characters and any required padding are inserted into out. Calls width(0).

to:

Effects: Behaves like a formatted inserter (as described in lib.ostream.formatted.reqmts) of out. After a sentry object is constructed it inserts characters. The number len of characters starting at s to be inserted is

- traits::length((const char*)s) if the second argument is of type const charT*
- char_traits<char>::length(s) if the second argument is of type const char*, const signed char*, or const unsigned char* and charT is not char.

Padding is determined as described in lib.facet.num.put.virtuals. The len characters starting at s are widened using out.widen (lib.basic.ios.members). The widened characters and any required padding are inserted into out. Calls width(0).

[Kona: It is clear to the LWG there is a defect here. Dietmar will supply specific wording.]

[Post-Tokyo: Dietmar supplied the above wording.]

[Toronto: The original proposed resolution involved char_traits<signed char> and char_traits<unsigned char>. There was strong opposition to requiring that library implementors provide those specializations of char_traits.]

[Copenhagen: This still isn't quite right: proposed resolution text got garbled when the signed char/unsigned char specializations were removed. Dietmar will provide revised wording.]


179. Comparison of const_iterators to iterators doesn't work

Section: 23.1 [lib.container.requirements]  Status: Review  Submitter: Judy Ward  Date: 2 Jul 1998

Currently the following will not compile on two well-known standard library implementations:

#include <set>
using namespace std;

void f(const set<int> &s)
{
  set<int>::iterator i;
  if (i==s.end()); // s.end() returns a const_iterator
}

The reason this doesn't compile is because operator== was implemented as a member function of the nested classes set::iterator and set::const_iterator, and there is no conversion from const_iterator to iterator. Surprisingly, (s.end() == i) does work, though, because of the conversion from iterator to const_iterator.

I don't see a requirement anywhere in the standard that this must work. Should there be one? If so, I think the requirement would need to be added to the tables in section 24.1.1. I'm not sure about the wording. If this requirement existed in the standard, I would think that implementors would have to make the comparison operators non-member functions.

This issue was also raised on comp.std.c++ by Darin Adler. The example given was:

bool check_equal(std::deque<int>::iterator i,
                 std::deque<int>::const_iterator ci)
{
    return i == ci;
}

Comment from John Potter:

In case nobody has noticed, accepting it will break reverse_iterator.

The fix is to make the comparison operators templated on two types.

    template <class Iterator1, class Iterator2>
    bool operator== (reverse_iterator<Iterator1> const& x,
                     reverse_iterator<Iterator2> const& y);
    

Obviously: return x.base() == y.base();

Currently, no reverse_iterator to const_reverse_iterator compares are valid.

BTW, I think the issue is in support of bad code. Compares should be between two iterators of the same type. All std::algorithms require the begin and end iterators to be of the same type.

Proposed resolution:

Insert this paragraph after 23.1 [lib.container.requirements] paragraph 7:

In the expressions

    i == j
    i != j
    i < j
    i <= j
    i >= j
    i > j
    i - j
  

where i and j denote objects of a container's iterator type, either or both may be replaced by an object of the container's const_iterator type referring to the same element with no change in semantics.

[post-Toronto: Judy supplied a proposed resolution saying that iterator and const_iterator could be freely mixed in iterator comparison and difference operations.]

[Redmond: Dave and Howard supplied a new proposed resolution which explicitly listed expressions; there was concern that the previous proposed resolution was too informal.]

Rationale:

The LWG believes it is clear that the above wording applies only to the nested types X::iterator and X::const_iterator, where X is a container. There is no requirement that X::reverse_iterator and X::const_reverse_iterator can be mixed. If mixing them is considered important, that's a separate issue. (Issue 280.)


187. iter_swap underspecified

Section: 25.2.2 [lib.alg.swap]  Status: Open  Submitter: Andrew Koenig  Date: 14 Aug 1999

The description of iter_swap in 25.2.2 paragraph 7 says that it "exchanges the values" of the objects to which two iterators refer.

What it doesn't say is whether it does so using swap or using the assignment operator and copy constructor.

This question is an important one to answer, because swap is specialized to work efficiently for standard containers.
For example:

vector<int> v1, v2;
iter_swap(&v1, &v2);

Is this call to iter_swap equivalent to calling swap(v1, v2)?  Or is it equivalent to

{
vector<int> temp = v1;
v1 = v2;
v2 = temp;
}

The first alternative is O(1); the second is O(n).

An LWG member, Dave Abrahams, comments:

Not an objection necessarily, but I want to point out the cost of that requirement:

iter_swap(list<T>::iterator, list<T>::iterator)

can currently be specialized to be more efficient than iter_swap(T*,T*) for many T (by using splicing). Your proposal would make that optimization illegal. 

[Kona: The LWG notes the original need for iter_swap was proxy iterators which are no longer permitted.]

Proposed resolution:

Change the effect clause of iter_swap in 25.2.2 paragraph 7 from:

Exchanges the values pointed to by the two iterators a and b.

to

swap(*a, *b).

[post-Toronto: The LWG is concerned about possible overspecification: there may be cases, such as Dave Abrahams's example above, and such as vector<bool>'s iterators, where it makes more sense for iter_swap to do something other than swap. If performance is a concern, it may be better to have explicit complexity requirements than to say how iter_swap should be implemented.]

[Redmond: Discussed, with no consensus. There was very little support for the proposed resolution. Some people favored closing this issue as NAD. Others favored a more complicated specification of iter_swap, which might distinguish between ordinary iterators and proxies. A possible new issue: how do we know that the iterators passed to iter_swap have Assignable value types? (If this new issue is real, it extends far beyond just iter_swap.)]


197. max_size() underspecified

Section: 20.1.5 [lib.allocator.requirements], 23.1 [lib.container.requirements]  Status: Open  Submitter: Andy Sawyer  Date: 21 Oct 1999

Must the value returned by max_size() be unchanged from call to call?

Must the value returned from max_size() be meaningful?

Possible meanings identified in lib-6827:

1) The largest container the implementation can support given "best case" conditions - i.e. assume the run-time platform is "configured to the max", and no overhead from the program itself. This may possibly be determined at the point the library is written, but certainly no later than compile time.

2) The largest container the program could create, given "best case" conditions - i.e. same platform assumptions as (1), but take into account any overhead for executing the program itself. (or, roughly "storage=storage-sizeof(program)"). This does NOT include any resource allocated by the program. This may (or may not) be determinable at compile time.

3) The largest container the current execution of the program could create, given knowledge of the actual run-time platform, but again, not taking into account any currently allocated resource. This is probably best determined at program start-up.

4) The largest container the current execution of the program could create at the point max_size() is called (or more correctly at the point max_size() returns :-), given its current environment (i.e. taking into account the actual currently available resources). This, obviously, has to be determined dynamically each time max_size() is called.

Proposed resolution:

Change 20.1.5 [lib.allocator.requirements] table 32 max_size() wording from:

      the largest value that can meaningfully be passed to X::allocate
to:
      the value of the largest constant expression (5.19 [expr.const]) that could ever meaningfully be passed to X::allocate

Change 23.1 [lib.container.requirements] table 65 max_size() wording from:

      size() of the largest possible container.
to:
      the value of the largest constant expression (5.19 [expr.const]) that could ever meaningfully be returned by X::size().

[Kona: The LWG informally discussed this and asked Andy Sawyer to submit an issue.]

[Tokyo: The LWG believes (1) above is the intended meaning.]

[Post-Tokyo: Beman Dawes supplied the above resolution at the request of the LWG. 21.3.3 [lib.string.capacity] was not changed because it references max_size() in 23.1. The term "compile-time" was avoided because it is not defined anywhere in the standard (even though it is used several places in the library clauses).]

[Copenhagen: Exactly what max_size means is still unclear. It may have a different meaning as a container member function than as an allocator member function. For the latter, it is probably best thought of as an architectural limit. Nathan will provide new wording.]


198. Validity of pointers and references unspecified after iterator destruction

Section: 24.1 [lib.iterator.requirements]  Status: Ready  Submitter: Beman Dawes  Date: 3 Nov 1999

Is a pointer or reference obtained from an iterator still valid after destruction of the iterator?

Is a pointer or reference obtained from an iterator still valid after the value of the iterator changes?

#include <iostream>
#include <vector>
#include <iterator>

int main()
{
    typedef std::vector<int> vec_t;
    vec_t v;
    v.push_back( 1 );

    // Is a pointer or reference obtained from an iterator still
    // valid after destruction of the iterator?
    int * p = &*v.begin();
    std::cout << *p << '\n';  // OK?

    // Is a pointer or reference obtained from an iterator still
    // valid after the value of the iterator changes?
    vec_t::iterator iter( v.begin() );
    p = &*iter++;
    std::cout << *p << '\n';  // OK?

    return 0;
}

The standard doesn't appear to directly address these questions. The standard needs to be clarified. At least two real-world cases have been reported where library implementors wasted considerable effort because of the lack of clarity in the standard. The question is important because requiring pointers and references to remain valid has the effect for practical purposes of prohibiting iterators from pointing to cached rather than actual elements of containers.

The standard itself assumes that pointers and references obtained from an iterator are still valid after iterator destruction or change. The definition of reverse_iterator::operator*(), 24.4.1.3.3 [lib.reverse.iter.op.star], which returns a reference, defines effects:

Iterator tmp = current;
return *--tmp;

The definition of reverse_iterator::operator->(), 24.4.1.3.4 [lib.reverse.iter.opref], which returns a pointer, defines effects:

return &(operator*());

Because the standard itself assumes pointers and references remain valid after iterator destruction or change, the standard should say so explicitly. This will also reduce the chance of user code breaking unexpectedly when porting to a different standard library implementation.

Proposed resolution:

Add a new paragraph to 24.1 [lib.iterator.requirements]:

Destruction of an iterator may invalidate pointers and references previously obtained from that iterator.

Replace paragraph 1 of 24.4.1.3.3 [lib.reverse.iter.op.star] with:

Effects:

  this->tmp = current;
  --this->tmp;
  return *this->tmp;

[Note: This operation must use an auxiliary member variable, rather than a temporary variable, to avoid returning a reference that persists beyond the lifetime of its associated iterator. (See 24.1 [lib.iterator.requirements].) The name of this member variable is shown for exposition only. --end note]

[Post-Tokyo: The issue has been reformulated purely in terms of iterators.]

[Pre-Toronto: Steve Cleary pointed out the no-invalidation assumption by reverse_iterator. The issue and proposed resolution was reformulated yet again to reflect this reality.]

[Copenhagen: Steve Cleary pointed out that reverse_iterator assumes its underlying iterator has persistent pointers and references. Andy Koenig pointed out that it is possible to rewrite reverse_iterator so that it no longer makes such an assumption. However, this issue is related to issue 299. If we decide it is intentional that p[n] may return by value instead of reference when p is a Random Access Iterator, other changes in reverse_iterator will be necessary.]

Rationale:

This issue has been discussed extensively. Note that it is not an issue about the behavior of predefined iterators. It is asking whether or not user-defined iterators are permitted to have transient pointers and references. Several people presented examples of useful user-defined iterators that have such a property; examples include a B-tree iterator, and an "iota iterator" that doesn't point to memory. Library implementors already seem to be able to cope with such iterators: they take pains to avoid forming references to memory that gets iterated past. The only place where this is a problem is reverse_iterator, so this issue changes reverse_iterator to make it work.

This resolution does not weaken any guarantees provided by predefined iterators like list<int>::iterator. Clause 23 should be reviewed to make sure that guarantees for predefined iterators are as strong as users expect.


200. Forward iterator requirements don't allow constant iterators

Section: 24.1.3 [lib.forward.iterators]  Status: Review  Submitter: Matt Austern  Date: 19 Nov 1999

In table 74, the return type of the expression *a is given as T&, where T is the iterator's value type. For constant iterators, however, this is wrong. ("Value type" is never defined very precisely, but it is clear that the value type of, say, std::list<int>::const_iterator is supposed to be int, not const int.)
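A small illustration (assuming a typical implementation in which dereferencing a const_iterator yields const int&; f is an illustrative name):

#include <list>

void f(const std::list<int>& c)
{
    std::list<int>::const_iterator i = c.begin();
    if (i != c.end()) {
        // The value type is int, yet *i is const int&, not int& --
        // which the "T&" entry in Table 74 does not account for.
        const int& r = *i;
        (void)r;
    }
}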

Proposed resolution:

In table 74, in the *a and *r++ rows, change the return type from "T&" to "T& if X is mutable, otherwise const T&". In the a->m row, change the return type from "U&" to "U& if X is mutable, otherwise const U&".

[Tokyo: The LWG believes this is the tip of a larger iceberg; there are multiple const problems with the STL portion of the library and that these should be addressed as a single package.  Note that issue 180 has already been declared NAD Future for that very reason.]

[Redmond: the LWG thinks this is separable from other constness issues. This issue is just cleanup; it clarifies language that was written before we had iterator_traits. Proposed resolution was modified: the original version only discussed *a. It was pointed out that we also need to worry about *r++ and a->m.]


201. Numeric limits terminology wrong

Section: 18.2.1 [lib.limits]  Status: Open  Submitter: Stephen Cleary  Date: 21 Dec 1999

In some places in this section, the terms "fundamental types" and "scalar types" are used when the term "arithmetic types" is intended. The current usage is incorrect because void is a fundamental type and pointers are scalar types, neither of which should have specializations of numeric_limits.

Proposed resolution:

Change 18.2 [lib.support.limits] para 1 from:

The headers <limits>, <climits>, and <cfloat> supply characteristics of implementation-dependent fundamental types (3.9.1).

to:

The headers <limits>, <climits>, and <cfloat> supply characteristics of implementation-dependent arithmetic types (3.9.1).

Change 18.2.1 [lib.limits] para 1 from:

The numeric_limits component provides a C++ program with information about various properties of the implementation's representation of the fundamental types.

to:

The numeric_limits component provides a C++ program with information about various properties of the implementation's representation of the arithmetic types.

Change 18.2.1 [lib.limits] para 2 from:

Specializations shall be provided for each fundamental type. . .

to:

Specializations shall be provided for each arithmetic type. . .

Change 18.2.1 [lib.limits] para 4 from:

Non-fundamental standard types. . .

to:

Non-arithmetic standard types. . .

Change 18.2.1.1 [lib.numeric.limits] para 1 from:

The member is_specialized makes it possible to distinguish between fundamental types, which have specializations, and non-scalar types, which do not.

to:

The member is_specialized makes it possible to distinguish between arithmetic types, which have specializations, and non-arithmetic types, which do not.

[post-Toronto: The opinion of the LWG is that the wording in the standard, as well as the wording of the proposed resolution, is flawed. The term "arithmetic types" is well defined in C and C++, and it is not clear that the term is being used correctly. It is also not clear that the term "implementation dependent" has any useful meaning in this context. The biggest problem is that numeric_limits seems to be intended both for built-in types and for user-defined types, and the standard doesn't make it clear how numeric_limits applies to each of those cases. A wholesale review of numeric_limits is needed. A paper would be welcome.]


202. unique() effects unclear when predicate not an equivalence relation

Section: 25.2.8 [lib.alg.unique]  Status: Review  Submitter: Andrew Koenig  Date: 13 Jan 2000

What should unique() do if you give it a predicate that is not an equivalence relation? There are at least two plausible answers:

1. You can't, because 25.2.8 says that it "eliminates all but the first element from every consecutive group of equal elements..." and it wouldn't make sense to interpret "equal" as meaning anything but an equivalence relation. [It also doesn't make sense to interpret "equal" as meaning ==, because then there would never be any sense in giving a predicate as an argument at all.]

2. The word "equal" should be interpreted to mean whatever the predicate says, even if it is not an equivalence relation (and in particular, even if it is not transitive).

The example that raised this question is from Usenet:

int f[] = { 1, 3, 7, 1, 2 };
int* z = unique(f, f+5, greater<int>());

If one blindly applies the definition using the predicate greater<int>, ignoring the word "equal", one gets:

Eliminates all but the first element from every consecutive group of elements referred to by the iterator i in the range [first, last) for which *i > *(i - 1).

The first surprise is the order of the comparison. If we wanted to allow for the predicate not being an equivalence relation, then we should surely compare elements the other way: pred(*(i - 1), *i). If we do that, then the description would seem to say: "Break the sequence into subsequences whose elements are in strictly increasing order, and keep only the first element of each subsequence". So the result would be 1, 1, 2. If we take the description at its word, it would seem to call for strictly DEcreasing order, in which case the result should be 1, 3, 7, 2.

In fact, the SGI implementation of unique() does neither: It yields 1, 3, 7.
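The following sketch (not the actual SGI source, but equivalent in effect for this example) shows where 1, 3, 7 comes from: each candidate element is compared against the last element kept, not against its immediate predecessor.

// Sketch of a typical forward-iterator unique(): compare each element
// against the last element retained so far.
template <class ForwardIter, class BinaryPred>
ForwardIter unique_sketch(ForwardIter first, ForwardIter last, BinaryPred pred)
{
    if (first == last) return last;
    ForwardIter result = first;          // last element kept
    while (++first != last)
        if (!pred(*result, *first))
            *++result = *first;
    return ++result;
}

With f = { 1, 3, 7, 1, 2 } and greater<int>, the elements retained are 1, 3, and 7, matching the observed behavior.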

Proposed resolution:

Change 25.2.8 [lib.alg.unique] paragraph 1 to:

For a nonempty range, eliminates all but the first element from every consecutive group of equivalent elements referred to by the iterator i in the range (first, last) for which the following conditions hold: *(i-1) == *i or pred(*(i-1), *i) != false.

Also insert a new paragraph, paragraph 2a, that reads: "Requires: The comparison function must be an equivalence relation."

[Redmond: discussed arguments for and against requiring the comparison function to be an equivalence relation. Straw poll: 14-2-5. First number is to require that it be an equivalence relation, second number is to explicitly not require that it be an equivalence relation, third number is people who believe they need more time to consider the issue. A separate issue: Andy Sawyer pointed out that "i-1" is incorrect, since "i" can refer to the first iterator in the range. Matt provided wording to address this problem.]

Rationale:

The LWG also considered an alternative resolution: change 25.2.8 [lib.alg.unique] paragraph 1 to:

For a nonempty range, eliminates all but the first element from every consecutive group of elements referred to by the iterator i in the range (first, last) for which the following conditions hold: *(i-1) == *i or pred(*(i-1), *i) != false.

Also insert a new paragraph, paragraph 1a, that reads: "Notes: The comparison function need not be an equivalence relation."

Informally: the proposed resolution imposes an explicit requirement that the comparison function be an equivalence relation. The alternative resolution does not, and it gives enough information so that the behavior of unique() for a non-equivalence relation is specified. Both resolutions are consistent with the behavior of existing implementations.


225. std:: algorithms use of other unqualified algorithms

Section: 17.4.4.3 [lib.global.functions]  Status: Open  Submitter: Dave Abrahams  Date: 01 Apr 2000

Are algorithms in std:: allowed to use other algorithms without qualification, so functions in user namespaces might be found through Koenig lookup?

For example, a popular standard library implementation includes this implementation of std::unique:

namespace std {
    template <class _ForwardIter>
    _ForwardIter unique(_ForwardIter __first, _ForwardIter __last) {
      __first = adjacent_find(__first, __last);
      return unique_copy(__first, __last, __first);
    }
}

Imagine two users on opposite sides of town, each using unique on his own sequences bounded by my_iterators. User1 looks at his standard library implementation and says, "I know how to implement a more efficient unique_copy for my_iterators", and writes:

namespace user1 {
    class my_iterator;
    // faster version for my_iterator
    my_iterator unique_copy(my_iterator, my_iterator, my_iterator);
    }

user1::unique_copy() is selected by Koenig lookup, as he intended.

User2 has other needs, and writes:

namespace user2 {
    class my_iterator;
    // Returns true iff *c is a unique copy of *a and *b.
    bool unique_copy(my_iterator a, my_iterator b, my_iterator c);
    }

User2 is shocked to find later that his fully-qualified use of std::unique(user2::my_iterator, user2::my_iterator, user2::my_iterator) fails to compile (if he's lucky). Looking in the standard, he sees the following Effects clause for unique():

Effects: Eliminates all but the first element from every consecutive group of equal elements referred to by the iterator i in the range [first, last) for which the following corresponding conditions hold: *i == *(i - 1) or pred(*i, *(i - 1)) != false

The standard gives user2 absolutely no reason to think he can interfere with std::unique by defining names in namespace user2. His standard library has been built with the template export feature, so he is unable to inspect the implementation. User1 eventually compiles his code with another compiler, and his version of unique_copy silently stops being called. Eventually, he realizes that he was depending on an implementation detail of his library and had no right to expect his unique_copy() to be called portably.

On the face of it, and given above scenario, it may seem obvious that the implementation of unique() shown is non-conforming because it uses unique_copy() rather than ::std::unique_copy(). Most standard library implementations, however, seem to disagree with this notion.

[Tokyo:  Steve Adamczyk from the core working group indicates that "std::" is sufficient;  leading "::" qualification is not required because any namespace qualification is sufficient to suppress Koenig lookup.]

Proposed resolution:

Add a paragraph and a note at the end of 17.4.4.3 [lib.global.functions]:

Unless otherwise specified, no global or non-member function in the standard library shall use a function from another namespace which is found through argument-dependent name lookup (3.4.2 [basic.lookup.koenig]).

[Note: the phrase "unless otherwise specified" is intended to allow Koenig lookup in cases like that of ostream_iterators:

Effects:

*out_stream << value;
if(delim != 0) *out_stream << delim;
return (*this);

--end note]

[Tokyo: The LWG agrees that this is a defect in the standard, but is as yet unsure if the proposed resolution is the best solution. Furthermore, the LWG believes that the same problem of unqualified library names applies to wording in the standard itself, and has opened issue 229 accordingly. Any resolution of issue 225 should be coordinated with the resolution of issue 229.]

[Toronto: The LWG is not sure if this is a defect in the standard. Most LWG members believe that an implementation of std::unique like the one quoted in this issue is already illegal, since, under certain circumstances, its semantics are not those specified in the standard. The standard's description of unique does not say that overloading adjacent_find should have any effect.]


226. User supplied specializations or overloads of namespace std function templates

Section: 17.4.3.1 [lib.reserved.names]  Status: Open  Submitter: Dave Abrahams  Date: 01 Apr 2000

The issues are: 

1. How can a 3rd party library implementor (lib1) write a version of a standard algorithm which is specialized to work with his own class template? 

2. How can another library implementor (lib2) write a generic algorithm which will take advantage of the specialized algorithm in lib1?

This appears to be the only viable answer under current language rules:

namespace lib1
{
    // arbitrary-precision numbers using T as a basic unit
    template <class T>
    class big_num { //...
    };
    
    // defining this in namespace std is illegal (it would be an
    // overload), so we hope users will rely on Koenig lookup
    template <class T>
    void swap(big_num<T>&, big_num<T>&);
}
#include <algorithm>
namespace lib2
{
    template <class T>
    void generic_sort(T* start, T* end)
    {
            ...
        // using-declaration required so we can work on built-in types
        using std::swap;
        // use Koenig lookup to find specialized algorithm if available
        swap(*x, *y);
    }
}

This answer has some drawbacks. First of all, it makes writing lib2 difficult and somewhat slippery. The implementor needs to remember to write the using-declaration, or generic_sort will fail to compile when T is a built-in type. The second drawback is that the use of this style in lib2 effectively "reserves" names in any namespace which defines types which may eventually be used with lib2. This may seem innocuous at first when applied to names like swap, but consider more ambiguous names like unique_copy() instead. It is easy to imagine the user wanting to define these names differently in his own namespace. A definition with semantics incompatible with the standard library could cause serious problems (see issue 225).
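
Since the code above is only a fragment, a complete, compilable rendition of the same pattern may be helpful (generic_two_step_swap is a hypothetical name, and the swap bodies are stubbed out; this is a sketch, not a recommended interface):

    #include <algorithm>    // std::swap

    namespace lib1 {
        template <class T>
        class big_num { /* ... */ };

        // Found by Koenig lookup when the arguments are big_nums:
        template <class T>
        void swap(big_num<T>&, big_num<T>&) { /* efficient swap */ }
    }

    namespace lib2 {
        template <class T>
        void generic_two_step_swap(T& a, T& b) {
            using std::swap;   // so the call below still works for built-in types
            swap(a, b);        // Koenig lookup finds lib1::swap for big_num<T>
        }
    }

    int main() {
        int i = 0, j = 1;
        lib1::big_num<double> x, y;
        lib2::generic_two_step_swap(i, j);   // calls std::swap<int>
        lib2::generic_two_step_swap(x, y);   // calls lib1::swap<double>
        return 0;
    }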

Why, you may ask, can't we just partially specialize std::swap()? It's because the language doesn't allow for partial specialization of function templates. If you write:

namespace std
{
    template <class T>
    void swap(lib1::big_num<T>&, lib1::big_num<T>&);
}

You have just overloaded std::swap, which is illegal under the current language rules. On the other hand, the following full specialization is legal:

namespace std
{
    template <>
    void swap(lib1::other_type&, lib1::other_type&);
}

This issue reflects concerns raised by the "Namespace issue with specialized swap" thread on comp.lang.c++.moderated. A similar set of concerns was earlier raised on the boost.org mailing list and the ACCU-general mailing list. Also see library reflector message c++std-lib-7354.

Proposed resolution:

[Tokyo: Summary, "There is no conforming way to extend std::swap for user defined templates."  The LWG agrees that there is a problem.  Would like more information before proceeding. This may be a core issue. Core issue 229 has been opened to discuss the core aspects of this problem. It was also noted that submissions regarding this issue have been received from several sources, but too late to be integrated into the issues list. ]

[Post-Tokyo: A paper with several proposed resolutions, J16/00-0029==WG21/N1252, "Shades of namespace std functions " by Alan Griffiths, is in the Post-Tokyo mailing. It should be considered a part of this issue.]

[Toronto: Dave Abrahams and Peter Dimov have proposed a resolution that involves core changes: it would add partial specialization of function templates. The Core Working Group is reluctant to add partial specialization of function templates. It is viewed as a large change, and the CWG believes that the proposal presented leaves some syntactic issues unanswered; if the CWG does add partial specialization of function templates, it wishes to develop its own proposal. The LWG continues to believe that there is a serious problem: there is no good way for users to force the library to use user specializations of generic standard library functions, and in certain cases (e.g. transcendental functions called by valarray and complex) this is important. Koenig lookup isn't adequate, since names within the library must be qualified with std (see issue 225), specialization doesn't work (we don't have partial specialization of function templates), and users aren't permitted to add overloads within namespace std. ]

[Copenhagen: Discussed at length, with no consensus. Relevant papers in the pre-Copenhagen mailing: N1289, N1295, N1296. Discussion focused on four options. (1) Relax restrictions on overloads within namespace std. (2) Mandate that the standard library use unqualified calls for swap and possibly other functions. (3) Introduce helper class templates for swap and possibly other functions. (4) Introduce partial specialization of function templates. Every option had both support and opposition. Straw poll (first number is support, second is strongly opposed): (1) 6, 4; (2) 6, 7; (3) 3, 8; (4) 4, 4.]

[Redmond: Discussed, again no consensus. Herb presented an argument that a user who is defining a type T with an associated swap should not be expected to put that swap in namespace std, either by overloading or by partial specialization. The argument is that swap is part of T's interface, and thus should go in the same namespace as T and only in that namespace. If we accept this argument, the consequence is that standard library functions should use unqualified calls to swap. (And which other functions? Any?) A small group (Nathan, Howard, Jeremy, Dave, Matt, Walter, Marc) will try to put together a proposal before the next meeting.]


229. Unqualified references of other library entities

Section: 17.4.1.1 [lib.contents]  Status: Open  Submitter: Steve Clamage  Date: 19 Apr 2000

Throughout the library chapters, the descriptions of library entities refer to other library entities without necessarily qualifying the names.

For example, section 25.2.2 "Swap" describes the effect of swap_ranges in terms of the unqualified name "swap". This section could reasonably be interpreted to mean that the library must be implemented so as to do a lookup of the unqualified name "swap", allowing users to override any ::std::swap function when Koenig lookup applies.

Although it would have been best to use explicit qualification with "::std::" throughout, too many lines in the standard would have to be adjusted to make that change in a Technical Corrigendum.

Issue 182, which addresses qualification of size_t, is a special case of this.

Proposed resolution:

To section 17.4.1.1 "Library contents" Add the following paragraph:

Whenever a name x defined in the standard library is mentioned, the name x is assumed to be fully qualified as ::std::x, unless explicitly described otherwise. For example, if the Effects section for library function F is described as calling library function G, the function ::std::G is meant.

[Post-Tokyo: Steve Clamage submitted this issue at the request of the LWG to solve a problem in the standard itself similar to the problem within implementations of library identified by issue 225. Any resolution of issue 225 should be coordinated with the resolution of this issue.]

[post-Toronto: Howard is undecided about whether it is appropriate for all standard library function names referred to in other standard library functions to be explicitly qualified by std: it is common advice that users should define global functions that operate on their class in the same namespace as the class, and this requires argument-dependent lookup if those functions are intended to be called by library code. Several LWG members are concerned that valarray appears to require argument-dependent lookup, but that the wording may not be clear enough to fall under "unless explicitly described otherwise".]


231. Precision in iostream?

Section: 22.2.2.2.2 [lib.facet.num.put.virtuals]  Status: Ready  Submitter: James Kanze, Stephen Clamage  Date:  25 Apr 2000

What is the following program supposed to output?

#include <iostream>

    int
    main()
    {
        std::cout.setf( std::ios::scientific , std::ios::floatfield ) ;
        std::cout.precision( 0 ) ;
        std::cout << 1.00 << '\n' ;
        return 0 ;
    }

From my C experience, I would expect "1e+00"; this is what printf("%.0e" , 1.00 ); does. G++ outputs "1.000000e+00".

The only indication I can find in the standard is 22.2.2.2.2/11, where it says "For conversion from a floating-point type, if (flags & fixed) != 0 or if str.precision() > 0, then str.precision() is specified in the conversion specification." This, however, is an obvious error: fixed is not a mask for a field, but a value that a multi-bit field may take -- the results of and'ing fmtflags with ios::fixed are not defined, at least not if ios::scientific has been set. G++'s behavior corresponds to what might happen if you do use (flags & fixed) != 0 with a typical implementation (floatfield == 3 << something, fixed == 1 << something, and scientific == 2 << something).

Presumably, the intent is either (flags & floatfield) != 0, or (flags & floatfield) == fixed; the first gives something more or less like the effect of precision in a printf floating point conversion. Only more or less, of course. In order to implement printf formatting correctly, you must know whether the precision was explicitly set or not. Say by initializing it to -1, instead of 6, and stating that for floating point conversions, if precision < -1, 6 will be used, for fixed point, if precision < -1, 1 will be used, etc. Plus, of course, if precision == 0 and flags & floatfield == 0, 1 should be used. But it probably isn't necessary to emulate all of the anomalies of printf:-).

Proposed resolution:

In 22.2.2.2.2 [lib.facet.num.put.virtuals], paragraph 11, change "if (flags & fixed) != 0" to "if (flags & floatfield) == fixed || (flags & floatfield) == scientific"

Rationale:

The floatfield determines whether numbers are formatted as if with %f, %e, or %g. If the fixed bit is set, it's %f, if scientific it's %e, and if both bits are set, or neither, it's %g.

Turning to the C standard, a precision of 0 is meaningful for %f and %e, but not for %g: for %g, precision 0 is taken to be the same as precision 1.

The proposed resolution has the effect that the output of the above program will be "1e+00".
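
For concreteness, the current and the proposed condition of paragraph 11 can be written out as small helper functions (illustrative sketches only, not drawn from any particular implementation; the "or if str.precision() > 0" alternative of the paragraph is left untouched by the proposed change):

    #include <ios>

    // Current wording: "if (flags & fixed) != 0 or if str.precision() > 0"
    bool precision_specified_current(std::ios_base::fmtflags flags,
                                     std::streamsize prec) {
        return (flags & std::ios_base::fixed) != 0 || prec > 0;
    }

    // Proposed wording: "if (flags & floatfield) == fixed ||
    // (flags & floatfield) == scientific", with the precision() > 0
    // alternative kept as it is today.
    bool precision_specified_proposed(std::ios_base::fmtflags flags,
                                      std::streamsize prec) {
        std::ios_base::fmtflags ff = flags & std::ios_base::floatfield;
        return ff == std::ios_base::fixed
            || ff == std::ios_base::scientific
            || prec > 0;
    }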


233. Insertion hints in associative containers

Section: 23.1.2 [lib.associative.reqmts]  Status: Review  Submitter: Andrew Koenig  Date: 30 Apr 2000

If mm is a multimap and p is an iterator into the multimap, then mm.insert(p, x) inserts x into mm with p as a hint as to where it should go. Table 69 claims that the execution time is amortized constant if the insert winds up taking place adjacent to p, but does not say when, if ever, this is guaranteed to happen. All it says is that p is a hint as to where to insert.

The question is whether there is any guarantee about the relationship between p and the insertion point, and, if so, what it is.

I believe the present state is that there is no guarantee: The user can supply p, and the implementation is allowed to disregard it entirely.

Additional comments from Nathan:
The vote [in Redmond] was on whether to elaborately specify the use of the hint, or to require behavior only if the value could be inserted adjacent to the hint. I would like to ensure that we have a chance to vote for a deterministic treatment: "before, if possible, otherwise after, otherwise anywhere appropriate", as an alternative to the proposed "before or after, if possible, otherwise [...]".
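
For concreteness, a small usage sketch (hypothetical values; the comments describe behavior under the proposed wording, not behavior guaranteed today):

    #include <map>
    #include <utility>

    int main() {
        std::multimap<int, char> mm;
        mm.insert(std::make_pair(1, 'a'));
        mm.insert(std::make_pair(1, 'b'));

        std::multimap<int, char>::iterator p = mm.begin();   // the (1, 'a') element

        // More than one insertion point is valid for an element with key 1.
        // Under the proposed wording, insertion adjacent to p is preferred,
        // and the insertion is amortized constant time if the new element
        // does end up adjacent to p.
        mm.insert(p, std::make_pair(1, 'c'));
        return 0;
    }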

Proposed resolution:

In table 69 "Associative Container Requirements" in 23.1.2 [lib.associative.reqmts], in the row for a.insert(p, t), change

iterator p is a hint pointing to where the insert should start to search.

to

insertion adjacent to iterator p is preferred if more than one insertion point is valid.

and change

logarithmic in general, but amortized constant if t is inserted right after p.

to

logarithmic in general, but amortized constant if t is inserted adjacent to iterator p.

[Toronto: there was general agreement that this is a real defect: when inserting an element x into a multiset that already contains several copies of x, there is no way to know whether the hint will be used. There was some support for an alternative resolution: we check on both sides of the hint (both before and after, in that order). If either is the correct location, the hint is used; otherwise it is not. This would be different from the original proposed resolution, because in the proposed resolution the hint will be used even if it is very far from the insertion point. JC van Winkel supplied precise wording for both options.]

[Copenhagen: the LWG looked at both options, and preferred the original. This preference is contingent on seeing a reference implementation showing that it is possible to implement this requirement without loss of efficiency.]

[Redmond: The LWG was reluctant to adopt the proposal that emerged from Copenhagen: it seemed excessively complicated, and went beyond fixing the defect that we identified in Toronto. PJP provided the new wording described in this issue. Nathan agrees that we shouldn't adopt the more detailed semantics, and notes: "we know that you can do it efficiently enough with a red-black tree, but there are other (perhaps better) balanced tree techniques that might differ enough to make the detailed semantics hard to satisfy."]


239. Complexity of unique() and/or unique_copy incorrect

Section: 25.2.8 [lib.alg.unique]  Status: Review  Submitter: Angelika Langer  Date: May 15 2000

The complexity of unique and unique_copy are inconsistent with each other and inconsistent with the implementations.  The standard specifies:

for unique():

-3- Complexity: If the range (last - first) is not empty, exactly (last - first) - 1 applications of the corresponding predicate, otherwise no applications of the predicate.

for unique_copy():

-7- Complexity: Exactly last - first applications of the corresponding predicate.

The implementations do it the other way round: unique() applies the predicate last-first times and unique_copy() applies it last-first-1 times.

As both algorithms use the predicate for pair-wise comparison of sequence elements I don't see a justification for unique_copy() applying the predicate last-first times, especially since it is not specified to which pair in the sequence the predicate is applied twice.
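
A small test harness of the kind one might use to observe the discrepancy on a given implementation (hypothetical code, not part of any proposed wording):

    #include <algorithm>
    #include <iostream>
    #include <vector>

    struct counting_eq {
        static int count;
        bool operator()(int a, int b) const { ++count; return a == b; }
    };
    int counting_eq::count = 0;

    int main() {
        const int a[] = { 1, 1, 2, 3, 3 };
        std::vector<int> v(a, a + 5);
        std::vector<int> out(5);

        counting_eq::count = 0;
        std::unique(v.begin(), v.end(), counting_eq());
        std::cout << "unique:      " << counting_eq::count << " applications\n";

        counting_eq::count = 0;
        std::unique_copy(a, a + 5, out.begin(), counting_eq());
        std::cout << "unique_copy: " << counting_eq::count << " applications\n";
        return 0;
    }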

Proposed resolution:

Change both complexity sections in 25.2.8 [lib.alg.unique] to:

Complexity: For nonempty ranges, exactly last - first - 1 applications of the corresponding predicate.

240. Complexity of adjacent_find() is meaningless

Section: 25.1.5 [lib.alg.adjacent.find]  Status: Ready  Submitter: Angelika Langer  Date: May 15 2000

The complexity section of adjacent_find is defective:

template <class ForwardIterator, class BinaryPredicate>
ForwardIterator adjacent_find(ForwardIterator first, ForwardIterator last,
                              BinaryPredicate pred);

-1- Returns: The first iterator i such that both i and i + 1 are in the range [first, last) for which the following corresponding conditions hold: *i == *(i + 1), pred(*i, *(i + 1)) != false. Returns last if no such iterator is found.

-2- Complexity: Exactly find(first, last, value) - first applications of the corresponding predicate.

In the Complexity section, it is not defined what "value" is supposed to mean. My best guess is that "value" means an object for which one of the conditions pred(*i,value) or pred(value,*i) is true, where i is the iterator defined in the Returns section. However, the value type of the input sequence need not be equality-comparable and for this reason the term find(first, last, value) - first is meaningless.

A term such as find_if(first, last, bind2nd(pred,*i)) - first or find_if(first, last, bind1st(pred,*i)) - first might come closer to the intended specification. Binders can only be applied to function objects that have the function call operator declared const, which is not required of predicates because they can have non-const data members. For this reason, a specification using a binder could only be an "as-if" specification.

Proposed resolution:

Change the complexity section in 25.1.5 [lib.alg.adjacent.find] to:

For a nonempty range, exactly min((i - first) + 1, (last - first) - 1) applications of the corresponding predicate, where i is adjacent_find's return value.

[Copenhagen: the original resolution specified an upper bound. The LWG preferred an exact count.]


241. Does unique_copy() require CopyConstructible and Assignable?

Section: 25.2.8 [lib.alg.unique]  Status: Review  Submitter: Angelika Langer  Date: May 15 2000

Some popular implementations of unique_copy() create temporary copies of values in the input sequence, at least if the input iterator is a pointer. Such an implementation is built on the assumption that the value type is CopyConstructible and Assignable.

It is common practice in the standard that algorithms explicitly specify any additional requirements that they impose on any of the types used by the algorithm. An example of an algorithm that creates temporary copies and correctly specifies the additional requirements is accumulate(), 26.4.1 [lib.accumulate].

Since the specifications of unique() and unique_copy() do not require CopyConstructible and Assignable of the InputIterator's value type the above mentioned implementations are not standard-compliant. I cannot judge whether this is a defect in the standard or a defect in the implementations.
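
The implementation technique in question keeps a local copy of the most recently written value, which is what drags in the CopyConstructible and Assignable requirements when the source is a pure input iterator. A rough sketch (illustrative only; real library implementations dispatch on iterator category and differ in detail):

    #include <iterator>

    template <class InputIterator, class OutputIterator>
    OutputIterator unique_copy_sketch(InputIterator first, InputIterator last,
                                      OutputIterator result) {
        typedef typename std::iterator_traits<InputIterator>::value_type value_type;
        if (first == last)
            return result;
        value_type value = *first;      // requires CopyConstructible
        *result = value;
        ++result;
        while (++first != last) {
            if (!(value == *first)) {   // compare against the remembered copy
                value = *first;         // requires Assignable
                *result = value;
                ++result;
            }
        }
        return result;
    }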

Proposed resolution:

In 25.2.8 change:

-4- Requires: The ranges [first, last) and [result, result+(last-first)) shall not overlap.

to:

-4- Requires: The ranges [first, last) and [result, result+(last-first)) shall not overlap. The expression *result = *first must be valid. If neither InputIterator nor OutputIterator meets the requirements of forward iterator then the value type of InputIterator must be copy constructible. Otherwise copy constructible is not required.

[Redmond: the original proposed resolution didn't impose an explicit requirement that the iterator's value type must be copy constructible, on the grounds that an input iterator's value type must always be copy constructible. Not everyone in the LWG thought that this requirement was clear from table 72. It has been suggested that it might be possible to implement unique_copy without requiring assignability, although current implementations do impose that requirement. Howard provided new wording.]


247. vector, deque::insert complexity

Section: 23.2.4.3 [lib.vector.modifiers]  Status: Open  Submitter: Lisa Lippincott  Date: 06 June 2000

Paragraph 2 of 23.2.4.3 [lib.vector.modifiers] describes the complexity of vector::insert:

Complexity: If first and last are forward iterators, bidirectional iterators, or random access iterators, the complexity is linear in the number of elements in the range [first, last) plus the distance to the end of the vector. If they are input iterators, the complexity is proportional to the number of elements in the range [first, last) times the distance to the end of the vector.

First, this fails to address the non-iterator forms of insert.

Second, the complexity for input iterators misses an edge case -- it requires that an arbitrary number of elements can be added at the end of a vector in constant time.

At the risk of strengthening the requirement, I suggest simply

Complexity: The complexity is linear in the number of elements inserted plus the distance to the end of the vector.

For input iterators, one may achieve this complexity by first inserting at the end of the vector, and then using rotate.
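
A sketch of that technique (a hypothetical free function, written here in terms of an element index rather than an iterator because push_back may invalidate iterators):

    #include <algorithm>
    #include <vector>

    template <class T, class InputIterator>
    void insert_via_rotate(std::vector<T>& v,
                           typename std::vector<T>::size_type pos,
                           InputIterator first, InputIterator last) {
        const typename std::vector<T>::size_type old_size = v.size();
        for (; first != last; ++first)
            v.push_back(*first);        // amortized constant per element
        // Rotate the appended elements into place; linear in the number of
        // elements inserted plus the distance from pos to the old end.
        std::rotate(v.begin() + pos, v.begin() + old_size, v.end());
    }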

I looked to see if deque had a similar problem, and was surprised to find that deque places no requirement on the complexity of inserting multiple elements (23.2.1.3 [lib.deque.modifiers], paragraph 3):

Complexity: In the worst case, inserting a single element into a deque takes time linear in the minimum of the distance from the insertion point to the beginning of the deque and the distance from the insertion point to the end of the deque. Inserting a single element either at the beginning or end of a deque always takes constant time and causes a single call to the copy constructor of T.

I suggest:

Complexity: The complexity is linear in the number of elements inserted plus the shorter of the distances to the beginning and end of the deque. Inserting a single element at either the beginning or the end of a deque causes a single call to the copy constructor of T.

Proposed resolution:

[Toronto: It's agreed that there is a defect in complexity of multi-element insert for vector and deque. For vector, the complexity should probably be something along the lines of c1 * N + c2 * distance(i, end()). However, there is some concern about whether it is reasonable to amortize away the copies that we get from a reallocation whenever we exceed the vector's capacity. For deque, the situation is somewhat less clear. Deque is notoriously complicated, and we may not want to impose complexity requirements that would imply any implementation technique more complicated than a while loop whose body is a single-element insert.]


253. valarray helper functions are almost entirely useless

Section: 26.3.2.1 [lib.valarray.cons], 26.3.2.2 [lib.valarray.assign]  Status: Review  Submitter: Robert Klarer  Date: 31 Jul 2000

This discussion is adapted from message c++std-lib-7056 posted November 11, 1999. I don't think that anyone can reasonably claim that the problem described below is NAD.

These valarray constructors can never be called:

   template <class T>
         valarray<T>::valarray(const slice_array<T> &);
   template <class T>
         valarray<T>::valarray(const gslice_array<T> &);
   template <class T>
         valarray<T>::valarray(const mask_array<T> &);
   template <class T>
         valarray<T>::valarray(const indirect_array<T> &);

Similarly, these valarray assignment operators cannot be called:

     template <class T>
     valarray<T> valarray<T>::operator=(const slice_array<T> &);
     template <class T>
     valarray<T> valarray<T>::operator=(const gslice_array<T> &);
     template <class T>
     valarray<T> valarray<T>::operator=(const mask_array<T> &);
     template <class T>
     valarray<T> valarray<T>::operator=(const indirect_array<T> &);

Please consider the following example:

   #include <valarray>
   using namespace std;

   int main()
   {
       valarray<double> va1(12);
       valarray<double> va2(va1[slice(1,4,3)]); // line 1
   }

Since the valarray va1 is non-const, the result of the sub-expression va1[slice(1,4,3)] at line 1 is an rvalue of type const std::slice_array<double>. This slice_array rvalue is then used to construct va2. The constructor that is used to construct va2 is declared like this:

     template <class T>
     valarray<T>::valarray(const slice_array<T> &);

Notice the constructor's const reference parameter. When the constructor is called, a slice_array must be bound to this reference. The rules for binding an rvalue to a const reference are in 8.5.3, paragraph 5 (see also 13.3.3.1.4). Specifically, paragraph 5 indicates that a second slice_array rvalue is constructed (in this case copy-constructed) from the first one; it is this second rvalue that is bound to the reference parameter. Paragraph 5 also requires that the constructor that is used for this purpose be callable, regardless of whether the second rvalue is elided. The copy-constructor in this case is not callable, however, because it is private. Therefore, the compiler should report an error.

Since slice_arrays are always rvalues, the valarray constructor that has a parameter of type const slice_array<T> & can never be called. The same reasoning applies to the three other constructors and the four assignment operators that are listed at the beginning of this post. Furthermore, since these functions cannot be called, the valarray helper classes are almost entirely useless.

Proposed resolution:

slice_array:

gslice_array:

mask_array:

indirect_array:

[This wording is taken from Robert Klarer's reflector message, c++std-lib-7827. Gabriel Dos Reis agrees that this general solution is correct.]

Rationale:

Keeping the helper classes' copy constructors private is untenable. Merely making valarray a friend of the helper classes isn't good enough, because access to the copy constructor is checked in the user's environment.

Making the assignment operator public is not strictly necessary to solve this problem. A majority of the LWG (straw poll: 13-4) believed we should make the assignment operators public, in addition to the copy constructors, for reasons of symmetry and user expectation.


254. Exception types in clause 19 are constructed from std::string

Section: 19.1 [lib.std.exceptions]  Status: Open  Submitter: Dave Abrahams  Date: 01 Aug 2000

Many of the standard exception types which implementations are required to throw are constructed with a const std::string& parameter. For example:

     19.1.5  Class out_of_range                          [lib.out.of.range]
     namespace std {
       class out_of_range : public logic_error {
       public:
         explicit out_of_range(const string& what_arg);
       };
     }

   1 The class out_of_range defines the type of objects thrown as
     exceptions to report an argument value not in its expected range.

     out_of_range(const string& what_arg);

     Effects:
       Constructs an object of class out_of_range.
     Postcondition:
       strcmp(what(), what_arg.c_str()) == 0.

There are at least two problems with this:

  1. A program which is low on memory may end up throwing std::bad_alloc instead of out_of_range because memory runs out while constructing the exception object.
  2. An obvious implementation which stores a std::string data member may end up invoking terminate() during exception unwinding because the exception object allocates memory (or rather fails to) as it is being copied.

There may be no cure for (1) other than changing the interface to out_of_range, though one could reasonably argue that (1) is not a defect. Personally I don't care that much if out-of-memory is reported when I only have 20 bytes left, in the case when out_of_range would have been reported. People who use exception-specifications might care a lot, though.

There is a cure for (2), but it isn't completely obvious. I think a note for implementors should be made in the standard. Avoiding possible termination in this case shouldn't be left up to chance. The cure is to use a reference-counted "string" implementation in the exception object. I am not necessarily referring to a std::string here; any simple reference-counting scheme for a NTBS would do.
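
A sketch of that cure (shared_msg is a hypothetical class; thread safety and cleanup on allocation failure in the first constructor are ignored for brevity; the point is only that the copy constructor and copy assignment cannot throw):

    #include <cstddef>
    #include <cstring>

    class shared_msg {
    public:
        explicit shared_msg(const char* s)          // may throw bad_alloc (problem 1)
            : count_(new std::size_t(1)),
              text_(new char[std::strlen(s) + 1]) {
            std::strcpy(text_, s);
        }
        shared_msg(const shared_msg& other)         // cannot throw
            : count_(other.count_), text_(other.text_) { ++*count_; }
        shared_msg& operator=(const shared_msg& other) {   // cannot throw
            ++*other.count_;                        // also handles self-assignment
            release();
            count_ = other.count_;
            text_ = other.text_;
            return *this;
        }
        ~shared_msg() { release(); }
        const char* c_str() const { return text_; }
    private:
        void release() { if (--*count_ == 0) { delete count_; delete[] text_; } }
        std::size_t* count_;
        char*        text_;
    };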

Further discussion, in email:

...I'm not so concerned about (1). After all, a library implementation can add const char* constructors as an extension, and users don't need to avail themselves of the standard exceptions, though this is a lame position to be forced into. FWIW, std::exception and std::bad_alloc don't require a temporary basic_string.

...I don't think the fixed-size buffer is a solution to the problem, strictly speaking, because you can't satisfy the postcondition
  strcmp(what(), what_arg.c_str()) == 0
for all values of what_arg (i.e. very long values). That means that the only truly conforming solution requires a dynamic allocation.

Further discussion, from Redmond:

The most important progress we made at the Redmond meeting was realizing that there are two separable issues here: the const string& constructor, and the copy constructor. If a user writes something like throw std::out_of_range("foo"), the const string& constructor is invoked before anything gets thrown. The copy constructor is potentially invoked during stack unwinding.

The copy constructor is a more serious problem, because failure during stack unwinding invokes terminate. The copy constructor must be nothrow.

The fundamental problem is that it's difficult to get the nothrow requirement to work well with the requirement that the exception objects store a string of unbounded size, particularly if you also try to make the const string& constructor nothrow. Options discussed include:

(Not all of these options are mutually exclusive.)

Proposed resolution:

[Toronto: some LWG members thought this was merely a QoI issue, but most believed that it was at least a borderline defect. There was more support for nonnormative advice to implementors than for a normative change.]

[Redmond: discussed, without definite conclusion. Most LWG members thought there was a real defect lurking here. A small group (Herb, Kevlin, Howard, Martin, Dave) will try to make a recommendation.]


258. Missing allocator requirement

Section: 20.1.5 [lib.allocator.requirements]  Status: Open  Submitter: Matt Austern  Date: 22 Aug 2000

From lib-7752:

I've been assuming (and probably everyone else has been assuming) that allocator instances have a particular property, and I don't think that property can be deduced from anything in Table 32.

I think we have to assume that allocator type conversion is a homomorphism. That is, if x1 and x2 are of type X, where X::value_type is T, and if type Y is X::template rebind<U>::other, then Y(x1) == Y(x2) if and only if x1 == x2.

Further discussion: Howard Hinnant writes, in lib-7757:

I think I can prove that this is not provable by Table 32. And I agree it needs to be true except for the "and only if". If x1 != x2, I see no reason why it can't be true that Y(x1) == Y(x2). Admittedly I can't think of a practical instance where this would happen, or be valuable. But I also don't see a need to add that extra restriction. I think we only need:

if (x1 == x2) then Y(x1) == Y(x2)

If we decide that == on allocators is transitive, then I think I can prove the above. But I don't think == is necessarily transitive on allocators. That is:

Given x1 == x2 and x2 == x3, this does not mean x1 == x3.

Example:

x1 can deallocate pointers from: x1, x2, x3
x2 can deallocate pointers from: x1, x2, x4
x3 can deallocate pointers from: x1, x3
x4 can deallocate pointers from: x2, x4

x1 == x2, and x2 == x4, but x1 != x4

Proposed resolution:

[Toronto: LWG members offered multiple opinions. One opinion is that it should not be required that x1 == x2 implies Y(x1) == Y(x2), and that it should not even be required that X(x1) == x1. Another opinion is that the second line from the bottom in table 32 already implies the desired property. This issue should be considered in light of other issues related to allocator instances.]


270. Binary search requirements overly strict

Section: 25.3.3 [lib.alg.binary.search]  Status: Ready  Submitter: Matt Austern  Date: 18 Oct 2000

Each of the four binary search algorithms (lower_bound, upper_bound, equal_range, binary_search) has a form that allows the user to pass a comparison function object. According to 25.3, paragraph 2, that comparison function object has to be a strict weak ordering.

This requirement is slightly too strict. Suppose we are searching through a sequence containing objects of type X, where X is some large record with an integer key. We might reasonably want to look up a record by key, in which case we would want to write something like this:

    struct key_comp {
      bool operator()(const X& x, int n) const {
        return x.key() < n;
      }
    };

    std::lower_bound(first, last, 47, key_comp());

key_comp is not a strict weak ordering, but there is no reason to prohibit its use in lower_bound.

There's no difficulty in implementing lower_bound so that it allows the use of something like key_comp. (It will probably work unless an implementor takes special pains to forbid it.) What's difficult is formulating language in the standard to specify what kind of comparison function is acceptable. We need a notion that's slightly more general than that of a strict weak ordering, one that can encompass a comparison function that involves different types. Expressing that notion may be complicated.

Additional questions raised at the Toronto meeting:

Additional discussion from Copenhagen:

Proposed resolution:

Change 25.3 [lib.alg.sorting] paragraph 3 from:

3 For all algorithms that take Compare, there is a version that uses operator< instead. That is, comp(*i, *j) != false defaults to *i < *j != false. For the algorithms to work correctly, comp has to induce a strict weak ordering on the values.

to:

3 For all algorithms that take Compare, there is a version that uses operator< instead. That is, comp(*i, *j) != false defaults to *i < *j != false. For algorithms other than those described in lib.alg.binary.search (25.3.3) to work correctly, comp has to induce a strict weak ordering on the values.

Add the following paragraph after 25.3 [lib.alg.sorting] paragraph 5:

-6- A sequence [start, finish) is partitioned with respect to an expression f(e) if there exists an integer n such that for all 0 <= i < distance(start, finish), f(*(start + i)) is true if and only if i < n.

Change 25.3.3 [lib.alg.binary.search] paragraph 1 from:

-1- All of the algorithms in this section are versions of binary search and assume that the sequence being searched is in order according to the implied or explicit comparison function. They work on non-random access iterators minimizing the number of comparisons, which will be logarithmic for all types of iterators. They are especially appropriate for random access iterators, because these algorithms do a logarithmic number of steps through the data structure. For non-random access iterators they execute a linear number of steps.

to:

-1- All of the algorithms in this section are versions of binary search and assume that the sequence being searched is partitioned with respect to an expression formed by binding the search key to an argument of the implied or explicit comparison function. They work on non-random access iterators minimizing the number of comparisons, which will be logarithmic for all types of iterators. They are especially appropriate for random access iterators, because these algorithms do a logarithmic number of steps through the data structure. For non-random access iterators they execute a linear number of steps.

Change 25.3.3.1 [lib.lower.bound] paragraph 1 from:

-1- Requires: Type T is LessThanComparable (lib.lessthancomparable).

to:

-1- Requires: The elements e of [first, last) are partitioned with respect to the expression e < value or comp(e, value)

Remove 25.3.3.1 [lib.lower.bound] paragraph 2:

-2- Effects: Finds the first position into which value can be inserted without violating the ordering.

Change 25.3.3.2 [lib.upper.bound] paragraph 1 from:

-1- Requires: Type T is LessThanComparable (lib.lessthancomparable).

to:

-1- Requires: The elements e of [first, last) are partitioned with respect to the expression !(value < e) or !comp(value, e)

Remove 25.3.3.2 [lib.upper.bound] paragraph 2:

-2- Effects: Finds the furthermost position into which value can be inserted without violating the ordering.

Change 25.3.3.3 [lib.equal.range] paragraph 1 from:

-1- Requires: Type T is LessThanComparable (lib.lessthancomparable).

to:

-1- Requires: The elements e of [first, last) are partitioned with respect to the expressions e < value and !(value < e) or comp(e, value) and !comp(value, e). Also, for all elements e of [first, last), e < value implies !(value < e) or comp(e, value) implies !comp(value, e)

Change 25.3.3.3 [lib.equal.range] paragraph 2 from:

-2- Effects: Finds the largest subrange [i, j) such that the value can be inserted at any iterator k in it without violating the ordering. k satisfies the corresponding conditions: !(*k < value) && !(value < *k) or comp(*k, value) == false && comp(value, *k) == false.

to:

   -2- Returns: 
         make_pair(lower_bound(first, last, value),
                   upper_bound(first, last, value))
       or
         make_pair(lower_bound(first, last, value, comp),
                   upper_bound(first, last, value, comp))

Change 25.3.3.4 [lib.binary.search] paragraph 1 from:

-1- Requires: Type T is LessThanComparable (lib.lessthancomparable).

to:

-1- Requires: The elements e of [first, last) are partitioned with respect to the expressions e < value and !(value < e) or comp(e, value) and !comp(value, e). Also, for all elements e of [first, last), e < value implies !(value < e) or comp(e, value) implies !comp(value, e)

[Copenhagen: Dave Abrahams provided this wording]

[Redmond: Minor changes in wording. (Removed "non-negative", and changed the "other than those described in" wording.) Also, the LWG decided to accept the "optional" part.]

Rationale:

The proposed resolution reinterprets binary search. Instead of thinking about searching for a value in a sorted range, we view that as an important special case of a more general algorithm: searching for the partition point in a partitioned range.

We also add a guarantee that the old wording did not: we ensure that the upper bound is no earlier than the lower bound, that the pair returned by equal_range is a valid range, and that the first part of that pair is the lower bound.
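
The reinterpretation can be illustrated with a small example (hypothetical code; under the proposed wording the call below is well-defined even though the range is merely partitioned, not sorted):

    #include <algorithm>
    #include <vector>

    struct less_than_key {
        // Heterogeneous comparison of the kind the new wording permits.
        bool operator()(int element, int key) const { return element < key; }
    };

    int main() {
        // The range is partitioned with respect to "e < 10": every element
        // less than 10 precedes every element that is not, although the
        // range as a whole is not sorted.
        const int a[] = { 3, 1, 7, 12, 10, 25 };
        std::vector<int> v(a, a + 6);
        std::vector<int>::iterator i =
            std::lower_bound(v.begin(), v.end(), 10, less_than_key());
        // i designates the partition point, the element 12.
        (void)i;
        return 0;
    }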


274. a missing/impossible allocator requirement

Section: 20.1.5 [lib.allocator.requirements]  Status: Ready  Submitter: Martin Sebor  Date: 02 Nov 2000

I see that table 31 in 20.1.5, p3 allows T in std::allocator<T> to be of any type. But the synopsis in 20.4.1 calls for allocator<>::address() to be overloaded on reference and const_reference, which is ill-formed for all T = const U. In other words, this won't work:

template class std::allocator<const int>;

The obvious solution is to disallow specializations of allocators on const types. However, while containers' elements are required to be assignable (which rules out specializations on const T's), I think that allocators might perhaps be potentially useful for const values in other contexts. So if allocators are to allow const types a partial specialization of std::allocator<const T> would probably have to be provided.

Proposed resolution:

Change the text in row 1, column 2 of table 32 in 20.1.5, p3 from

any type

to

any non-const, non-reference type

[Redmond: previous proposed resolution was "any non-const, non-volatile, non-reference type". Got rid of the "non-volatile".]

Rationale:

Two resolutions were originally proposed: one that partially specialized std::allocator for const types, and one that said an allocator's value type may not be const. The LWG chose the second. The first wouldn't be appropriate, because allocators are intended for use by containers, and const value types don't work in containers. Encouraging the use of allocators with const value types would only lead to unsafe code.

The original text for proposed resolution 2 was modified so that it also forbids volatile types and reference types.


276. Assignable requirement for container value type overly strict

Section: 23.1 [lib.container.requirements]  Status: Ready  Submitter: Peter Dimov  Date: 07 Nov 2000

23.1/3 states that the objects stored in a container must be Assignable. 23.3.1 [lib.map], paragraph 2, states that map satisfies all requirements for a container, while at the same time defining value_type as pair<const Key, T> - a type that is not Assignable.

It should be noted that there exists a valid and non-contradictory interpretation of the current text. The wording in 23.1/3 avoids mentioning value_type, referring instead to "objects stored in a container." One might argue that map does not store objects of type map::value_type, but of map::mapped_type instead, and that the Assignable requirement applies to map::mapped_type, not map::value_type.

However, this makes map a special case (other containers store objects of type value_type) and the Assignable requirement is needlessly restrictive in general.

For example, the proposed resolution of active library issue 103 is to make set::iterator a constant iterator; this means that no set operations can exploit the fact that the stored objects are Assignable.

This is related to, but slightly broader than, closed issue 140.
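
The map case can be seen directly (illustration only):

    #include <map>

    int main() {
        typedef std::map<int, double>::value_type value_type;  // pair<const int, double>
        value_type a(1, 1.0);
        value_type b(2, 2.0);
        // a = b;   // would not compile: the const 'first' member cannot be
        //          // assigned, so value_type is not Assignable
        return 0;
    }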

Proposed resolution:

23.1/3: Strike the trailing part of the sentence:

, and the additional requirements of Assignable types from 23.1/3

so that it reads:

-3- The type of objects stored in these components must meet the requirements of CopyConstructible types (lib.copyconstructible).

23.1/4: Modify to make clear that this requirement is not for all containers. Change to:

-4- Table 64 defines the Assignable requirement. Some containers require this property of the types to be stored in the container. T is the type used to instantiate the container. t is a value of T, and u is a value of (possibly const) T.

23.1, Table 65: in the first row, change "T is Assignable" to "T is CopyConstructible".

23.2.1/2: Add sentence for Assignable requirement. Change to:

-2- A deque satisfies all of the requirements of a container and of a reversible container (given in tables in lib.container.requirements) and of a sequence, including the optional sequence requirements (lib.sequence.reqmts). In addition to the requirements on the stored object described in 23.1[lib.container.requirements], the stored object must also meet the requirements of Assignable. Descriptions are provided here only for operations on deque that are not described in one of these tables or for operations where there is additional semantic information.

23.2.2/2: Add Assignable requirement to specific methods of list. Change to:

-2- A list satisfies all of the requirements of a container and of a reversible container (given in two tables in lib.container.requirements) and of a sequence, including most of the optional sequence requirements (lib.sequence.reqmts). The exceptions are the operator[] and at member functions, which are not provided. [Footnote: These member functions are only provided by containers whose iterators are random access iterators. --- end footnote]

list does not require the stored type T to be Assignable unless the following methods are instantiated: [Footnote: Implementors are permitted but not required to take advantage of T's Assignable properties for these methods. -- end footnote]

     list<T,Allocator>& operator=(const list<T,Allocator>&  x );
     template <class InputIterator>
       void assign(InputIterator first, InputIterator last);
     void assign(size_type n, const T& t);

Descriptions are provided here only for operations on list that are not described in one of these tables or for operations where there is additional semantic information.

23.2.4/2: Add sentence for Assignable requirement. Change to:

-2- A vector satisfies all of the requirements of a container and of a reversible container (given in two tables in lib.container.requirements) and of a sequence, including most of the optional sequence requirements (lib.sequence.reqmts). The exceptions are the push_front and pop_front member functions, which are not provided. In addition to the requirements on the stored object described in 23.1[lib.container.requirements], the stored object must also meet the requirements of Assignable. Descriptions are provided here only for operations on vector that are not described in one of these tables or for operations where there is additional semantic information.

Rationale:

list, set, multiset, map, multimap are able to store non-Assignables. However, there is some concern about list<T>: although in general there's no reason for T to be Assignable, some implementations of the member functions operator= and assign do rely on that requirement. The LWG does not want to forbid such implementations.

Note that the type stored in a standard container must still satisfy the requirements of the container's allocator; this rules out, for example, such types as "const int". See issue 274 for more details.

In principle we could also relax the "Assignable" requirement for individual vector member functions, such as push_back. However, the LWG did not see great value in such selective relaxation. Doing so would remove implementors' freedom to implement vector::push_back in terms of vector::insert.


278. What does iterator validity mean?

Section: 23.2.2.4 [lib.list.ops]  Status: Review  Submitter: P.J. Plauger  Date: 27 Nov 2000

Section 23.2.2.4 [lib.list.ops] states that

  void splice(iterator position, list<T, Allocator>& x);

invalidates all iterators and references to list x.

But what does the C++ Standard mean by "invalidate"? You can still dereference the iterator to a spliced list element, but you'd better not use it to delimit a range within the original list. For the latter operation, it has definitely lost some of its validity.

If we accept the proposed resolution to issue 250, then we'd better clarify that a "valid" iterator need no longer designate an element within the same container as it once did. We then have to clarify what we mean by invalidating a past-the-end iterator, as when a vector or string grows by reallocation. Clearly, such an iterator has a different kind of validity. Perhaps we should introduce separate terms for the two kinds of "validity."
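
A small example of the two senses of validity (hypothetical code; the comments state what node-based implementations do in practice, which is precisely what the issue asks the standard to pin down):

    #include <list>

    int main() {
        std::list<int> x;
        std::list<int> y;
        x.push_back(1);
        x.push_back(2);
        std::list<int>::iterator i = x.begin();   // designates the element 1

        y.splice(y.begin(), x);   // 23.2.2.4 says this "invalidates" iterators into x

        // In practice i still designates the same element, which now lives in y,
        // and dereferencing it yields 1 -- one sense of "validity".  But i may
        // no longer be used to delimit a range within x -- another sense.
        (void)i;
        return 0;
    }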

Proposed resolution:

Add the following text to the end of section 24.1 [lib.iterator.requirements], after paragraph 5:

An invalid iterator is an iterator that may be singular. [Footnote: This definition applies to pointers, since pointers are iterators. The effect of dereferencing an iterator that has been invalidated is undefined.]

[post-Copenhagen: Matt provided wording.]

[Redmond: General agreement with the intent, some objections to the wording. Dave provided new wording.]


280. Comparison of reverse_iterator to const reverse_iterator

Section: 24.4.1 [lib.reverse.iterators]  Status: Open  Submitter: Steve Cleary  Date: 27 Nov 2000

This came from an email from Steve Cleary to Fergus in reference to issue 179. The library working group briefly discussed this in Toronto and believed it should be a separate issue. There were also some reservations about whether this was a worthwhile problem to fix.

Steve said: "Fixing reverse_iterator. std::reverse_iterator can (and should) be changed to preserve these additional requirements." He also said in email that it can be done without breaking user's code: "If you take a look at my suggested solution, reverse_iterator doesn't have to take two parameters; there is no danger of breaking existing code, except someone taking the address of one of the reverse_iterator global operator functions, and I have to doubt if anyone has ever done that. . . But, just in case they have, you can leave the old global functions in as well -- they won't interfere with the two-template-argument functions. With that, I don't see how any user code could break."

Proposed resolution:

Section: 24.4.1.1 [lib.reverse.iterator] add/change the following declarations:

  A) Add a templated assignment operator, after the same manner
        as the templated copy constructor, i.e.:

  template < class U >
  reverse_iterator < Iterator >& operator=(const reverse_iterator< U >& u);

  B) Make all global functions (except the operator+) have
  two template parameters instead of one, that is, for
  operator ==, !=, <, >, <=, >=, - replace:

       template < class Iterator >
       typename reverse_iterator< Iterator >::difference_type operator-(
                 const reverse_iterator< Iterator >& x,
                 const reverse_iterator< Iterator >& y);

  with:

      template < class Iterator1, class Iterator2 >
      typename reverse_iterator < Iterator1 >::difference_type operator-(
                 const reverse_iterator < Iterator1 > & x,
                 const reverse_iterator < Iterator2 > & y);

Also make the addition/changes for these signatures in 24.4.1.3 [lib.reverse.iter.ops].

[ Copenhagen: The LWG is concerned that the proposed resolution introduces new overloads. Experience shows that introducing overloads is always risky, and that it would be inappropriate to make this change without implementation experience. It may be desirable to provide this feature in a different way. ]


282. What types does numpunct grouping refer to?

Section: 22.2.2.2.2 [lib.facet.num.put.virtuals]  Status: Open  Submitter: Howard Hinnant  Date: 5 Dec 2000

Paragraph 16 mistakenly singles out integral types for inserting thousands_sep() characters. This conflicts with the syntax for floating point numbers described under 22.2.3.1/2.

Proposed resolution:

Change paragraph 16 from:

For integral types, punct.thousands_sep() characters are inserted into the sequence as determined by the value returned by punct.do_grouping() using the method described in 22.2.3.1.2 [lib.facet.numpunct.virtuals].

To:

For arithmetic types, punct.thousands_sep() characters are inserted into the sequence as determined by the value returned by punct.do_grouping() using the method described in 22.2.3.1.2 [lib.facet.numpunct.virtuals].

[ Copenhagen: Opinions were divided about whether this is actually an inconsistency, but at best it seems to have been unintentional. This is only an issue for floating-point output: The standard is unambiguous that implementations must parse thousands_sep characters when performing floating-point input. The standard is also unambiguous that this requirement does not apply to the "C" locale. ]

[ A survey of existing practice is needed; it is believed that some implementations do insert thousands_sep characters for floating-point output and others fail to parse thousands_sep characters for floating-point input even though this is unambiguously required by the standard. ]


283. std::replace() requirement incorrect/insufficient

Section: 25.2.4 [lib.alg.replace]  Status: Review  Submitter: Martin Sebor  Date: 15 Dec 2000

The requirement in 25.2.4 [lib.alg.replace], p1 that T be Assignable (23.1 [lib.container.requirements]) is neither necessary nor sufficient for either of the algorithms. The algorithms require that std::iterator_traits<ForwardIterator>::value_type be Assignable and that both std::iterator_traits<ForwardIterator>::value_type and T be EqualityComparable (20.1.1 [lib.equalitycomparable]) with respect to one another.

Further discussion, from Jeremy:

There are a number of problems with the requires clauses for the algorithms in 25.1 [lib.alg.nonmodifying] and 25.2 [lib.alg.modifying.operations]. The requires clause of each algorithm should describe the necessary and sufficient requirements on the inputs to the algorithm such that the algorithm compiles and runs properly. Many of the requires clauses fail to do this. Here is a summary of the kinds of mistakes:

  1. Use of EqualityComparable, which only puts requirements on a single type, when in fact an equality operator is required between two different types, typically either between T and the iterator's value_type or between the value_type's of two different iterators.
  2. Use of Assignable for T when in fact what was needed is Assignable for the value_type of the iterator, and convertibility from T to the value_type of the iterator. Or for output iterators, the requirement should be that T is writable to the iterator (output iterators do not have value types; see issue 324).
  3. Lack of a requires clause.

Here is the list of algorithms that contain mistakes:

Also, in the requirements for EqualityComparable, the requirement that the operator be defined for const objects is lacking.
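
An example of the first kind of mistake (hypothetical types, chosen only for illustration): the call below is useful and well-formed even though the stated requirement says nothing about the comparison that find() actually performs, which is between the iterator's value type and T.

    #include <algorithm>
    #include <vector>

    struct employee {
        int id;
    };

    // Heterogeneous equality: employee on the left, int on the right.
    bool operator==(const employee& e, int n) { return e.id == n; }

    int main() {
        std::vector<employee> v(3);
        v[0].id = 1; v[1].id = 2; v[2].id = 3;
        std::vector<employee>::iterator i =
            std::find(v.begin(), v.end(), 2);   // uses operator==(const employee&, int)
        (void)i;
        return 0;
    }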

Proposed resolution:

20.1.1 [lib.equalitycomparable] Change p1 from

In Table 28, T is a type to be supplied by a C++ program instantiating a template, a, b, and c are values of type T.

to

In Table 28, T is a type to be supplied by a C++ program instantiating a template, a, b, and c are values of type const T.

25.1.2 [lib.alg.find] Change p1 from

Requires: Type T is EqualityComparable (20.1.1).

to

Requires: There must be an equality operator defined that accepts type std::iterator_traits<InputIterator>::reference for the left operand and const T for the right operand.

25.1.3 [lib.alg.find.end] Add the following requires clause

Requires: There must be an equality operator defined that accepts type const std::iterator_traits<ForwardIterator1>::value_type for the left operand and const std::iterator_traits<ForwardIterator2>::value_type for the right operand.

25.1.4 [lib.alg.find.first.of] Add the following requires clause

Requires: There must be an equality operator defined that accepts type const std::iterator_traits<ForwardIterator1>::value_type for the left operand and const std::iterator_traits<ForwardIterator2>::value_type for the right operand.

25.1.5 [lib.alg.adjacent.find] Add the following requires clause

Requires: T must be EqualityComparable (20.1.1).

25.1.6 [lib.alg.count] Change p1 from

Requires: Type T is EqualityComparable (20.1.1).

to

Requires: There must be an equality operator defined that accepts type std::iterator_traits<InputIterator>::reference for the left operand and const T for the right operand.

25.1.7 [lib.mismatch] Add the following requires clause

Requires: There must be an equality operator defined that accepts type std::iterator_traits<InputIterator1>::reference for the left operand and std::iterator_traits<InputIterator2>::reference for the right operand.

25.1.8 [lib.alg.equal] Add the following requires clause

Requires: There must be an equality operator defined that accepts type std::iterator_traits<InputIterator1>::reference for the left operand and std::iterator_traits<InputIterator2>::reference for the right operand.

25.1.9 [lib.alg.search] Add the following requires clause

Requires: There must be an equality operator defined that accepts type const std::iterator_traits<ForwardIterator1>::value_type for the left operand and const std::iterator_traits<ForwardIterator2>::value_type for the right operand.

Change p4 from

Requires: Type T is EqualityComparable (20.1.1), type Size is convertible to integral type (4.7.12.3).

to

Requires: There must be an equality operator defined that accepts const std::iterator_traits<ForwardIterator>::value_type for the left operand and const T for the right operand. The type Size is convertible to integral type (4.7.12.3).

25.2.4 [lib.alg.replace] Change p1 from

Requires: Type T is Assignable (23.1 [lib.container.requirements]) (and, for replace(), EqualityComparable (20.1.1 [lib.equalitycomparable])).

to

Requires: Type std::iterator_traits<ForwardIterator>::value_type is Assignable (23.1 [lib.container.requirements]) and the type const T is convertible to std::iterator_traits<ForwardIterator>::value_type. For replace(), an equality operator must be defined that accepts type std::iterator_traits<ForwardIterator>::reference for the left operand and const T for the right operand.

and change p4 from

Requires: Type T is Assignable (23.1 [lib.container.requirements]) (and, for replace_copy(), EqualityComparable (20.1.1 [lib.equalitycomparable])). The ranges [first, last) and [result, result + (last - first)) shall not overlap.

to

Requires: Both types const T and std::iterator_traits<InputIterator>::reference are writable to the OutputIterator type. For replace_copy() an equality operator must be defined that accepts type std::iterator_traits<InputIterator>::reference for the left operand and const T for the right operand. The ranges [first, last) and [result, result + (last - first)) shall not overlap.

25.2.5 [lib.alg.fill] Change p1 from

Requires: Type T is Assignable (23.1 [lib.container.requirements] ). Size is convertible to an integral type (3.9.1 [basic.fundamental] ).

to

Requires: Type const T is writable to the OutputIterator. Size is convertible to an integral type (3.9.1 [basic.fundamental] ).

25.2.7 [lib.alg.remove] Change p1 from

Requires: Type T is EqualityComparable (20.1.1 [lib.equalitycomparable]).

to

Requires: There must be an equality operator defined that accepts type const std::iterator_traits<ForwardIterator>::value_type for the left operand and const T for the right operand. The type std::iterator_traits<ForwardIterator>::value_type must be Assignable (23.1 [lib.container.requirements]).

284. unportable example in 20.3.7, p6

Section: 20.3.7 [lib.function.pointer.adaptors]  Status: Ready  Submitter: Martin Sebor  Date: 26 Dec 2000

The example in 20.3.7 [lib.function.pointer.adaptors], p6 shows how to use the C library function strcmp() with the function pointer adapter ptr_fun(). But since it's unspecified whether the C library functions have extern "C" or extern "C++" linkage [17.4.2.2 [lib.using.linkage]], and since function pointers with different language linkage specifications (7.5 [dcl.link]) are incompatible, it is unspecified whether this example is well-formed.

Proposed resolution:

Change 20.3.7 [lib.function.pointer.adaptors] paragraph 6 from:

[Example:

    replace_if(v.begin(), v.end(), not1(bind2nd(ptr_fun(strcmp), "C")), "C++");
  

replaces each C with C++ in sequence v.

to:

[Example:

    int compare(const char*, const char*);
    replace_if(v.begin(), v.end(),
               not1(bind2nd(ptr_fun(compare), "abc")), "def");
  

replaces each abc with def in sequence v.

Also, remove footnote 215 in that same paragraph.

[Copenhagen: Minor change in the proposed resolution. Since this issue deals in part with C and C++ linkage, it was believed to be too confusing for the strings in the example to be "C" and "C++". ]

[Redmond: More minor changes. Got rid of the footnote (which seems to make a sweeping normative requirement, even though footnotes aren't normative), and changed the sentence after the footnote so that it corresponds to the new code fragment.]


290. Requirements to for_each and its function object

Section: 25.1.1 [lib.alg.foreach]  Status: Open  Submitter: Angelika Langer  Date: 03 Jan 2001

The specification of the for_each algorithm does not have a "Requires" section, which means that there are no restrictions imposed on the function object whatsoever. In essence it means that I can provide any function object with arbitrary side effects and I can still expect a predictable result. In particular I can expect that the function object is applied exactly last - first times, which is promised in the "Complexity" section.

I don't see how any implementation can give such a guarantee without imposing requirements on the function object.

Just as an example: consider a function object that removes elements from the input sequence. In that case, what does the complexity guarantee (applies f exactly last - first times) mean?
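
For illustration (hypothetical code, not proposed wording), a function object of the kind just described:

    #include <algorithm>
    #include <list>

    // The function object erases elements from the very list that for_each is
    // traversing, invalidating the iterators the algorithm is using, so the
    // "applies f exactly last - first times" guarantee cannot be honoured.
    struct eraser
    {
        std::list<int>* l;
        explicit eraser(std::list<int>& lst) : l(&lst) {}
        void operator()(int x) const { if (x < 0) l->remove(x); }
    };

    void shoot_foot(std::list<int>& lst)
    {
        std::for_each(lst.begin(), lst.end(), eraser(lst));
    }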

One can argue that this is obviously a nonsensical application and a purely theoretical case; unfortunately, it isn't. I have seen programmers shooting themselves in the foot this way, and they did not understand that there are restrictions even if the description of the algorithm does not say so.

Proposed resolution:

Add a "Requires" section to section 25.1.1 similar to those proposed for transform and the numeric algorithms (see issue 242):

-2- Requires: In the range [first, last], f shall not invalidate iterators or subranges.

[Copenhagen: The LWG agrees that a function object passed to an algorithm should not invalidate iterators in the range that the algorithm is operating on. The LWG believes that this should be a blanket statement in Clause 25, not just a special requirement for for_each. ]


291. Underspecification of set algorithms

Section: 25.3.5 [lib.alg.set.operations]  Status: Open  Submitter: Matt Austern  Date: 03 Jan 2001

The standard library contains four algorithms that compute set operations on sorted ranges: set_union, set_intersection, set_difference, and set_symmetric_difference. Each of these algorithms takes two sorted ranges as inputs, and writes the output of the appropriate set operation to an output range. The elements in the output range are sorted.

The ordinary mathematical definitions are generalized so that they apply to ranges containing multiple copies of a given element. Two elements are considered to be "the same" if, according to an ordering relation provided by the user, neither one is less than the other. So, for example, if one input range contains five copies of an element and another contains three, the output range of set_union will contain five copies, the output range of set_intersection will contain three, the output range of set_difference will contain two, and the output range of set_symmetric_difference will contain two.
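
For example (illustrative only, using int elements and the default ordering):

    #include <algorithm>
    #include <iterator>
    #include <vector>

    void multiplicities()
    {
        int a[] = { 1, 1, 1, 1, 1 };    // five copies of an element
        int b[] = { 1, 1, 1 };          // three copies of the same element
        std::vector<int> u, i, d, s;

        std::set_union(a, a + 5, b, b + 3, std::back_inserter(u));                // five 1s
        std::set_intersection(a, a + 5, b, b + 3, std::back_inserter(i));         // three 1s
        std::set_difference(a, a + 5, b, b + 3, std::back_inserter(d));           // two 1s
        std::set_symmetric_difference(a, a + 5, b, b + 3, std::back_inserter(s)); // two 1s
    }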

Because two elements can be "the same" for the purposes of these set algorithms, without being identical in other respects (consider, for example, strings under case-insensitive comparison), this raises a number of unanswered questions:

The standard should either answer these questions, or explicitly say that the answers are unspecified. I prefer the former option, since, as far as I know, all existing implementations behave the same way.

Proposed resolution:

[The LWG agrees that the standard should answer these questions. Matt will provide wording.]


294. User defined macros and standard headers

Section: 17.4.3.1.1 [lib.macro.names]  Status: Open  Submitter: James Kanze  Date: 11 Jan 2001

Paragraph 2 of 17.4.3.1.1 [lib.macro.names] reads: "A translation unit that includes a header shall not contain any macros that define names declared in that header." As I read this, it would mean that the following program is legal:

  #define npos 3.14
  #include <sstream>

since npos is not defined in <sstream>. It is, however, defined in <string>, and it is hard to imagine an implementation in which <sstream> didn't include <string>.

I think that this phrase was probably formulated before it was decided that a standard header may freely include other standard headers. The phrase would be perfectly appropriate for C, for example. In light of 17.4.4.1 [lib.res.on.headers] paragraph 1, however, it isn't stringent enough.

Proposed resolution:

In paragraph 2 of 17.4.3.1.1 [lib.macro.names], change "A translation unit that includes a header shall not contain any macros that define names declared in that header." to "A translation unit that includes a header shall not contain any macros that define names declared in any standard header."

[Copenhagen: the general idea is clearly correct, but there is concern about making sure that the two paragraphs in 17.4.3.1.1 [lib.macro.names] remain consistent. Nathan will provide new wording.]


299. Incorrect return types for iterator dereference

Section: 24.1.4 [lib.bidirectional.iterators], 24.1.5 [lib.random.access.iterators]  Status: Open  Submitter: John Potter  Date: 22 Jan 2001

In section 24.1.4 [lib.bidirectional.iterators], Table 75 gives the return type of *r-- as convertible to T. This is not consistent with Table 74 which gives the return type of *r++ as T&. *r++ = t is valid while *r-- = t is invalid.

In section 24.1.5 [lib.random.access.iterators], Table 76 gives the return type of a[n] as convertible to T. This is not consistent with the semantics of *(a + n) which returns T& by Table 74. *(a + n) = t is valid while a[n] = t is invalid.

Discussion from the Copenhagen meeting: the first part is uncontroversial. The second part, operator[] for Random Access Iterators, requires more thought. There are reasonable arguments on both sides. Return by value from operator[] enables some potentially useful iterators, e.g. a random access "iota iterator" (a.k.a "counting iterator" or "int iterator"). There isn't any obvious way to do this with return-by-reference, since the reference would be to a temporary. On the other hand, reverse_iterator takes an arbitrary Random Access Iterator as template argument, and its operator[] returns by reference. If we decided that the return type in Table 76 was correct, we would have to change reverse_iterator. This change would probably affect user code.
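
A sketch (hypothetical, for illustration only) of the kind of random access iterator that return-by-value in Table 76 would permit:

    // A "counting iterator": the values it produces exist nowhere in memory,
    // so operator[] (and operator*) can only return by value, never T&.
    struct counting_iterator
    {
        int n;
        int operator*() const           { return n; }       // by value
        int operator[](int k) const     { return n + k; }   // cannot return int&
        counting_iterator& operator++() { ++n; return *this; }
        // ... the remaining random access iterator operations are omitted here
    };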

History: the contradiction between reverse_iterator and the Random Access Iterator requirements has been present from an early stage. In both the STL proposal adopted by the committee (N0527==94-0140) and the STL technical report (HPL-95-11 (R.1), by Stepanov and Lee), the Random Access Iterator requirements say that operator[]'s return value is "convertible to T". In N0527 reverse_iterator's operator[] returns by value, but in HPL-95-11 (R.1), and in the STL implementation that HP released to the public, reverse_iterator's operator[] returns by reference. In 1995, the standard was amended to reflect the contents of HPL-95-11 (R.1). The original intent for operator[] is unclear.

In the long term it may be desirable to add more fine-grained iterator requirements, so that access method and traversal strategy can be decoupled. (See "Improved Iterator Categories and Requirements", N1297 = 01-0011, by Jeremy Siek.) Any decisions about issue 299 should keep this possibility in mind.

Proposed resolution:

In section 24.1.4 [lib.bidirectional.iterators], change the return type in table 75 from "convertible to T" to T&.

In section 24.1.5 [lib.random.access.iterators], change the return type in table 76 from "convertible to T" to T&.


300. list::merge() specification incomplete

Section: 23.2.2.4 [lib.list.ops]  Status: Open  Submitter: John Pedretti  Date: 23 Jan 2001

The "Effects" clause for list::merge() (23.2.2.4, p23) appears to be incomplete: it doesn't cover the case where the argument list is identical to *this (i.e., this == &x). The requirement in the note in p24 (below) is that x be empty after the merge which is surely unintended in this case.
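
For illustration (hypothetical code, not proposed wording), the self-merge case in question:

    #include <list>

    void self_merge(std::list<int>& l)
    {
        l.merge(l);   // &x == this: requiring x to be empty afterwards
                      // would mean requiring l to empty itself
    }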

Proposed resolution:

Change 23.2.2.4, p23 to:

Effects: If &x == this, does nothing; otherwise, merges the argument list into the list.

[Copenhagen: The proposed resolution does not fix all of the problems in 23.2.2.4 [lib.list.ops], p22-25. Three different paragraphs (23, 24, 25) describe the effects of merge. Changing p23, without changing the other two, appears to introduce contradictions. Additionally, "merges the argument list into the list" is excessively vague.]


304. Must *a return an lvalue when a is an input iterator?

Section: 24.1 [lib.iterator.requirements]  Status: Open  Submitter: Dave Abrahams  Date: 5 Feb 2001

We all "know" that input iterators are allowed, when dereferenced, to produce values of which there is no other in-memory copy.

But: Table 72, with a careful reading, seems to imply that this can only be the case if the value_type has no members (e.g. is a built-in type).

The problem occurs in the following entry:

  a->m     pre: (*a).m is well-defined
           Equivalent to (*a).m

(*a).m can be well-defined even if *a is not a reference type, but since operator->() must return a pointer for a->m to be well-formed, it needs something to return a pointer to. This seems to indicate that *a must be buffered somewhere to make a legal input iterator.

I don't think this was intentional.

Proposed resolution:

[Copenhagen: the two obvious possibilities are to keep the operator-> requirement for Input Iterators, and put in a non-normative note describing how it can be implemented with proxies, or else moving the operator-> requirement from Input Iterator to Forward Iterator. If we do the former we'll also have to change istreambuf_iterator, because it has no operator->. A straw poll showed roughly equal support for the two options.]
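
A minimal sketch (all types hypothetical) of the proxy approach mentioned above: operator-> returns a helper object holding a copy of the value, and the helper's own operator-> yields a pointer to that copy, so a->m works even though *a is not a reference into any container.

    #include <string>

    struct arrow_proxy                       // hypothetical helper
    {
        std::string value;
        explicit arrow_proxy(const std::string& v) : value(v) {}
        const std::string* operator->() const { return &value; }
    };

    class line_iterator                      // hypothetical input iterator
    {
    public:
        std::string operator*() const;       // returns by value
        arrow_proxy operator->() const { return arrow_proxy(**this); }
        line_iterator& operator++();
        // ... equality operators, typedefs, etc. omitted
    };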


305. Default behavior of codecvt<wchar_t, char, mbstate_t>::length()

Section: 22.2.1.5.2 [lib.locale.codecvt.virtuals]  Status: Review  Submitter: Howard Hinnant  Date: 24 Jan 2001

22.2.1.5/3 introduces codecvt in part with:

codecvt<wchar_t,char,mbstate_t> converts between the native character sets for tiny and wide characters. Instantiations on mbstate_t perform conversion between encodings known to the library implementor.

But 22.2.1.5.2/10 describes do_length in part with:

... codecvt<wchar_t, char, mbstate_t> ... return(s) the lesser of max and (from_end-from).

The semantics of do_in and do_length are linked. What one does must be consistent with what the other does. 22.2.1.5/3 leads me to believe that the vendor is allowed to choose the algorithm that codecvt<wchar_t,char,mbstate_t>::do_in performs so that it makes his customers happy on a given platform. But 22.2.1.5.2/10 explicitly says what codecvt<wchar_t,char,mbstate_t>::do_length must return. And thus indirectly specifies the algorithm that codecvt<wchar_t,char,mbstate_t>::do_in must perform. I believe that this is not what was intended and is a defect.

Discussion from the -lib reflector:
This proposal would have the effect of making the semantics of all of the virtual functions in codecvt<wchar_t, char, mbstate_t> implementation specified. Is that what we want, or do we want to mandate specific behavior for the base class virtuals and leave the implementation specified behavior for the codecvt_byname derived class? The tradeoff is that the former allows implementors to write a base class that actually does something useful, while the latter gives users a way to get known and specified---albeit useless---behavior, and is consistent with the way the standard handles other facets. It is not clear what the original intention was.

Nathan has suggested a compromise: a character that is a widened version of the characters in the basic execution character set must be converted to a one-byte sequence, but there is no such requirement for characters that are not part of the basic execution character set.

Proposed resolution:

Change 22.2.1.5.2/5 from:

The instantiations required in Table 51 (lib.locale.category), namely codecvt<wchar_t,char,mbstate_t> and codecvt<char,char,mbstate_t>, store no characters. Stores no more than (to_limit-to) destination elements. It always leaves the to_next pointer pointing one beyond the last element successfully stored.

to:

Stores no more than (to_limit-to) destination elements, and leaves the to_next pointer pointing one beyond the last element successfully stored. codecvt<char,char,mbstate_t> stores no characters.

Change 22.2.1.5.2/10 from:

-10- Returns: (from_next-from) where from_next is the largest value in the range [from,from_end] such that the sequence of values in the range [from,from_next) represents max or fewer valid complete characters of type internT. The instantiations required in Table 51 (21.1.1.1.1), namely codecvt<wchar_t, char, mbstate_t> and codecvt<char, char, mbstate_t>, return the lesser of max and (from_end-from).

to:

-10- Returns: (from_next-from) where from_next is the largest value in the range [from,from_end] such that the sequence of values in the range [from,from_next) represents max or fewer valid complete characters of type internT. The instantiation codecvt<char, char, mbstate_t> returns the lesser of max and (from_end-from).

[Redmond: Nathan suggested an alternative resolution: same as above, but require that, in the default encoding, a character from the basic execution character set would map to a single external character. The straw poll was 8-1 in favor of the proposed resolution.]

Rationale:

The default encoding should be whatever users of a given platform would expect to be the most natural. This varies from platform to platform. In many cases there is a preexisting C library, and users would expect the default encoding to be whatever C uses in the default "C" locale. We could impose a guarantee like the one Nathan suggested (a character from the basic execution character set must map to a single external character), but this would rule out important encodings that are in common use: it would rule out Shift-JIS, for example, and it would rule out a fixed-width encoding of UCS-4.


309. Does sentry catch exceptions?

Section: 27.6 [lib.iostream.format]  Status: Open  Submitter: Martin Sebor  Date: 19 Mar 2001

The descriptions of the constructors of basic_istream<>::sentry (27.6.1.1.2 [lib.istream::sentry]) and basic_ostream<>::sentry (27.6.2.3 [lib.ostream::sentry]) do not explain what the functions do in case an exception is thrown while they execute. Some current implementations allow all exceptions to propagate, others catch them and set ios_base::badbit instead, still others catch some but let others propagate.

The text also mentions that the functions may call setstate(failbit) (without actually saying on what object, but presumably the stream argument is meant). That may have been fine for basic_istream<>::sentry prior to issue 195, since the function performs an input operation which may fail. However, issue 195 amends 27.6.1.1.2 [lib.istream::sentry], p2 to clarify that the function should actually call setstate(failbit | eofbit), so the sentence in p3 is redundant or even somewhat contradictory.

The same sentence that appears in 27.6.2.3 [lib.ostream::sentry], p3 doesn't seem to be very meaningful for basic_ostream<>::sentry, which performs no input. It is actually rather misleading since it would appear to guide library implementers to calling setstate(failbit) when os.tie()->flush(), the only called function, throws an exception (typically, it's badbit that's set in response to such an event).

Proposed resolution:

Add the following paragraph immediately after 27.6.1.1.2 [lib.istream::sentry], p5

If an exception is thrown during the preparation then ios::badbit is turned on* in is's error state.

[Footnote: This is done without causing an ios::failure to be thrown. --- end footnote]

If (is.exceptions() & ios_base::badbit) != 0 then the exception is rethrown.

And strike the following sentence from 27.6.1.1.2 [lib.istream::sentry], p5

During preparation, the constructor may call setstate(failbit) (which may throw ios_base::failure (lib.iostate.flags))

Add the following paragraph immediately after 27.6.2.3 [lib.ostream::sentry], p3

If an exception is thrown during the preparation then ios::badbit is turned on* in os's error state.

[Footnote: This is done without causing an ios::failure to be thrown. --- end footnote]

If (os.exceptions() & ios_base::badbit) != 0 then the exception is rethrown.

And strike the following sentence from 27.6.2.3 [lib.ostream::sentry], p3

During preparation, the constructor may call setstate(failbit) (which may throw ios_base::failure (lib.iostate.flags))

(Note that the removal of the two sentences means that the ctors will not be able to report the failure of any implementation-dependent operations referred to in footnotes 280 and 293, unless such operations throw an exception.)
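
A sketch of the control flow the added paragraphs describe, written against a hypothetical free function; set_badbit_without_throwing() stands in for whatever internal, non-throwing mechanism an implementation uses to honour the footnote ("without causing an ios::failure to be thrown"):

    #include <ios>
    #include <istream>

    void set_badbit_without_throwing(std::istream& is);   // hypothetical helper

    void prepare(std::istream& is)                         // hypothetical
    {
        try {
            if (is.tie())
                is.tie()->flush();                         // part of "preparation"
        }
        catch (...) {
            set_badbit_without_throwing(is);               // badbit on in is's error state
            if ((is.exceptions() & std::ios_base::badbit) != 0)
                throw;                                     // rethrow the original exception
        }
    }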

[ Copenhagen: It was agreed that there was an issue here, but there was disagreement about the resolution. Some LWG members argued that a sentry's constructor should not catch exceptions, because sentries should only be used within (un)formatted input functions and that exception handling is the responsibility of those functions, not of the sentries. ]


310. Is errno a macro?

Section: 17.4.1.2 [lib.headers], 19.3 [lib.errno]  Status: Ready  Submitter: Steve Clamage  Date: 21 Mar 2001

Exactly how should errno be declared in a conforming C++ header?

The C standard says in 7.1.4 that it is unspecified whether errno is a macro or an identifier with external linkage. In some implementations it can be either, depending on compile-time options. (E.g., on Solaris in multi-threading mode, errno is a macro that expands to a function call, but is an extern int otherwise. "Unspecified" allows such variability.)

The C++ standard:

I find no other references to errno.

We should either explicitly say that errno must be a macro, even though it need not be a macro in C, or else explicitly leave it unspecified. We also need to say something about namespace std. A user who includes <cerrno> needs to know whether to write errno, or ::errno, or std::errno, or else <cerrno> is useless.
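
To illustrate the user's dilemma (a hedged sketch, not proposed wording): if errno is required to be a macro after including <cerrno>, then plain errno is the portable spelling in code such as the following.

    #include <cerrno>
    #include <cstdlib>

    int parse(const char* s)
    {
        errno = 0;                          // a macro needs no std:: or :: qualification
        long v = std::strtol(s, 0, 10);
        if (errno == ERANGE)
            return -1;                      // value out of range
        return static_cast<int>(v);
    }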

Two acceptable fixes:

[ This issue was first raised in 1999, but it slipped through the cracks. ]

Proposed resolution:

Change the Note in section 17.4.1.2p5 from

Note: the names defined as macros in C include the following: assert, errno, offsetof, setjmp, va_arg, va_end, and va_start.

to

Note: the names defined as macros in C include the following: assert, offsetof, setjmp, va_arg, va_end, and va_start.

In section 19.3, change paragraph 2 from

The contents are the same as the Standard C library header <errno.h>.

to

The contents are the same as the Standard C library header <errno.h>, except that errno shall be defined as a macro.

Rationale:

C++ must not leave it up to the implementation to decide whether or not a name is a macro; it must explicitly specify exactly which names are required to be macros.


311. Incorrect wording in basic_ostream class synopsis

Section: 27.6.2.1 [lib.ostream]  Status: Ready  Submitter: Andy Sawyer  Date: 21 Mar 2001

In 27.6.2.1 [lib.ostream], the synopsis of class basic_ostream says:

  // partial specializationss
  template<class traits>
    basic_ostream<char,traits>& operator<<( basic_ostream<char,traits>&,
                                            const char * );

Problems:

Proposed resolution:

In the synopsis in 27.6.2.1 [lib.ostream], remove the // partial specializationss comment. Also remove the same comment (correctly spelled, but still incorrect) from the synopsis in 27.6.2.5.4 [lib.ostream.inserters.character].

[ Pre-Redmond: added 27.6.2.5.4 [lib.ostream.inserters.character] because of Martin's comment in c++std-lib-8939. ]


315. Bad "range" in list::unique complexity

Section: 23.2.2.4 [lib.list.ops]  Status: Ready  Submitter: Andy Sawyer  Date: 1 May 2001

23.2.2.4 [lib.list.ops], Para 21 describes the complexity of list::unique as: "If the range (last - first) is not empty, exactly (last - first) -1 applications of the corresponding predicate, otherwise no applications of the predicate)".

"(last - first)" is not a range.

Proposed resolution:

Change the "range" from (last - first) to [first, last).


316. Vague text in Table 69

Section: 23.1.2 [lib.associative.reqmts]  Status: Ready  Submitter: Martin Sebor  Date: 4 May 2001

Table 69 says this about a_uniq.insert(t):

inserts t if and only if there is no element in the container with key equivalent to the key of t. The bool component of the returned pair indicates whether the insertion takes place and the iterator component of the pair points to the element with key equivalent to the key of t.

The description should be more specific about exactly how the bool component indicates whether the insertion takes place.

Proposed resolution:

Change the text in question to

...The bool component of the returned pair is true if and only if the insertion takes place...

317. Instantiation vs. specialization of facets

Section: 22 [lib.localization]  Status: Ready  Submitter: Martin Sebor  Date: 4 May 2001

The localization section of the standard refers to specializations of the facet templates as instantiations even though the required facets are typically specialized rather than explicitly (or implicitly) instantiated. In the case of ctype<char> and ctype_byname<char> (and the wchar_t versions), these facets are actually required to be specialized. The terminology should be corrected to make it clear that the standard doesn't mandate explicit instantiation (the term specialization encompasses both explicit instantiations and specializations).

Proposed resolution:

In the following paragraphs, replace all occurrences of the word instantiation or instantiations with specialization or specializations, respectively:

22.1.1.1.1, p4, Table 52, 22.2.1.1, p2, 22.2.1.5, p3, 22.2.1.5.1, p5, 22.2.1.5.2, p10, 22.2.2, p2, 22.2.3.1, p1, 22.2.3.1.2, p1, p2 and p3, 22.2.4.1, p1, 22.2.4.1.2, p1, 22.2.5, p1, 22.2.6, p2, 22.2.6.3.2, p7, and Footnote 242.

And change the text in 22.1.1.1.1, p4 from

An implementation is required to provide those instantiations for facet templates identified as members of a category, and for those shown in Table 52:

to

An implementation is required to provide those specializations...

[Nathan will review these changes, and will look for places where explicit specialization is necessary.]

Rationale:

This is a simple matter of outdated language. The language to describe templates was clarified during the standardization process, but the wording in clause 22 was never updated to reflect that change.


318. Misleading comment in definition of numpunct_byname

Section: 22.2.3.2 [lib.locale.numpunct.byname]  Status: Ready  Submitter: Martin Sebor  Date: 12 May 2001

The definition of the numpunct_byname template contains the following comment:

    namespace std {
        template <class charT>
        class numpunct_byname : public numpunct<charT> {
    // this class is specialized for char and wchar_t.
        ...

There is no documentation of the specializations and it seems conceivable that an implementation will not explicitly specialize the template at all, but simply provide the primary template.

Proposed resolution:

Remove the comment from the text in 22.2.3.2 and from the proposed resolution of library issue 228.


319. Storage allocation wording confuses "Required behavior", "Requires"

Section: 18.4.1.1 [lib.new.delete.single], 18.4.1.2 [lib.new.delete.array]  Status: Ready  Submitter: Beman Dawes  Date: 15 May 2001

The standard specifies 17.3.1.3 [lib.structure.specifications] that "Required behavior" elements describe "the semantics of a function definition provided by either the implementation or a C++ program."

The standard specifies 17.3.1.3 [lib.structure.specifications] that "Requires" elements describe "the preconditions for calling the function."

In the sections noted below, the current wording specifies "Required Behavior" for what are actually preconditions, and thus should be specified as "Requires".

Proposed resolution:

In 18.4.1.1 [lib.new.delete.single] Para 12 Change:

Required behavior: accept a value of ptr that is null or that was returned by an earlier call ...

to:

Requires: the value of ptr is null or the value returned by an earlier call ...

In 18.4.1.2 [lib.new.delete.array] Para 11 Change:

Required behavior: accept a value of ptr that is null or that was returned by an earlier call ...

to:

Requires: the value of ptr is null or the value returned by an earlier call ...


320. list::assign overspecified

Section: 23.2.2.1 [lib.list.cons]  Status: Review  Submitter: Howard Hinnant  Date: 17 May 2001

Section 23.2.2.1, paragraphs 6-8 specify that list assign (both forms) have the "effects" of a call to erase followed by a call to insert.

I would like to document that implementers have the freedom to implement assign by other methods, as long as the end result is the same and the exception guarantee is as good or better than the basic guarantee.

The motivation for this is to use T's assignment operator to recycle existing nodes in the list instead of erasing them and reallocating them with new values. It is also worth noting that, with careful coding, most common cases of assign (everything but assignment with true input iterators) can elevate the exception safety to strong if T's assignment has a nothrow guarantee (with no extra memory cost). Metrowerks does this. However I do not propose that this subtlety be standardized. It is a QoI issue.

Existing practice: Metrowerks and SGI recycle nodes, Dinkumware and Rogue Wave don't.
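
A sketch of the node-recycling technique described above, written against a hypothetical list-like container L with the usual begin/end/insert/erase members; an actual member implementation could do the same directly on its own nodes.

    template <class L, class InputIterator>
    void assign_recycling(L& l, InputIterator first, InputIterator last)
    {
        typename L::iterator i = l.begin();
        for (; i != l.end() && first != last; ++i, ++first)
            *i = *first;                       // reuse an existing node via T's assignment
        if (first != last)
            l.insert(l.end(), first, last);    // more input than nodes: append the rest
        else
            l.erase(i, l.end());               // fewer: drop the surplus nodes
    }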

Proposed resolution:

Change 23.2.2.1/7 from:

Effects:

   erase(begin(), end());
   insert(begin(), first, last);

to:

Effects: Replaces the contents of the list with the range [first, last).

In 23.1.1 [lib.sequence.reqmts], in Table 67 (sequence requirements), add a new row:

      a.assign(i,j)     void      pre: i,j are not iterators into a.
                                  Replaces elements in a with copies
                                  of elements in [i, j).

Change 23.2.2.1/8 from:

Effects:

   erase(begin(), end());
   insert(begin(), n, t);

to:

Effects: Replaces the contents of the list with n copies of t.

[Redmond: Proposed resolution was changed slightly. Previous version made explicit statement about exception safety, which wasn't consistent with the way exception safety is expressed elsewhere. Also, the change in the sequence requirements is new. Without that change, the proposed resolution would have required that assignment of a subrange work. That too would have been overspecification; it would effectively mandate that assignment use a temporary. ]


321. Typo in num_get

Section: 22.2.2.1.2 [lib.facet.num.get.virtuals]  Status: Ready  Submitter: Kevin Djang  Date: 17 May 2001

Section 22.2.2.1.2 at p7 states that "A length specifier is added to the conversion function, if needed, as indicated in Table 56." However, Table 56 uses the term "length modifier", not "length specifier".

Proposed resolution:

In 22.2.2.1.2 at p7, change the text "A length specifier is added ..." to be "A length modifier is added ..."

Rationale:

C uses the term "length modifier". We should be consistent.


322. iterator and const_iterator should have the same value type

Section: 23.1 [lib.container.requirements]  Status: Ready  Submitter: Matt Austern  Date: 17 May 2001

It's widely assumed that, if X is a container, iterator_traits<X::iterator>::value_type and iterator_traits<X::const_iterator>::value_type should both be X::value_type. However, this is nowhere stated. The language in Table 65 is not precise about the iterators' value types (it predates iterator_traits), and could even be interpreted as saying that iterator_traits<X::const_iterator>::value_type should be "const X::value_type".

Related issue: 279.

Proposed resolution:

In Table 65 ("Container Requirements"), change the return type for X::iterator to "iterator type whose value type is T". Change the return type for X::const_iterator to "constant iterator type whose value type is T".

Rationale:

This belongs as a container requirement, rather than an iterator requirement, because the whole notion of iterator/const_iterator pairs is specific to containers' iterators.

It is existing practice that (for example) iterator_traits<list<int>::const_iterator>::value_type is "int", rather than "const int". This is consistent with the way that const pointers are handled: the standard already requires that iterator_traits<const int*>::value_type is int.


323. abs() overloads in different headers

Section: 26.5 [lib.c.math]  Status: Open  Submitter: Dave Abrahams  Date: 4 June 2001

Currently the standard mandates the following overloads of abs():

    abs(long), abs(int) in <cstdlib>

    abs(float), abs(double), abs(long double) in <cmath>

    template<class T> T abs(const complex<T>&) in <complex>

    template<class T> valarray<T> abs(const valarray<T>&); in <valarray>

The problem is that having only some overloads visible of a function that works on "implicitly inter-convertible" types is dangerous in practice. The headers that get included at any point in a translation unit can change unpredictably during program development/maintenance. The wrong overload might be unintentionally selected.
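
For illustration only (not proposed wording): which overloads of abs() are visible below depends entirely on which headers the translation unit ends up including, directly or indirectly.

    #include <cmath>       // declares abs(float), abs(double), abs(long double)
    #include <cstdlib>     // declares abs(int), abs(long)

    // Fine while both headers are visible; if <cmath> stops being pulled in
    // somewhere upstream, the same call may become ambiguous or quietly select
    // an integer overload, depending on the implementation.
    double r = std::abs(-3.7);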

Currently, there is nothing that mandates the simultaneous visibility of these overloads. Indeed, some vendors have begun fastidiously reducing dependencies among their (public) headers as a QOI issue: it helps people to write portable code by refusing to compile unless all the correct headers are #included.

The same issue may exist for other functions in the library.

Redmond: PJP reports that C99 adds two new kinds of abs: complex, and int_max_abs.

Related issue: 343.

Proposed resolution:

[Redmond: General agreement that the current situation is somewhat fragile. No consensus on whether it's more fragile than any number of other things, or whether there's any good way to fix it. Walter suggests that abs should be defined for all built-in types in both <cmath> and <cstdlib>, but that no effort should be made to put all overloads for class types in one place. Beman suggests closing this issue as "NAD Future", and adding an <all> header as an extension. The <all> header would solve a more general problem: users who can't remember which names are defined in which headers. (See issue 343)]


324. Do output iterators have value types?

Section: 24.1.2 [lib.output.iterators]  Status: Review  Submitter: Dave Abrahams  Date: 7 June 2001

Table 73 suggests that output iterators have value types. It requires the expression "*a = t". Additionally, although Table 73 never lists "a = t" or "X(a) = t" in the "expressions" column, it contains a note saying that "a = t" and "X(a) = t" have equivalent (but nowhere specified!) semantics.

According to 24.1/9, t is supposed to be "a value of value type T":

In the following sections, a and b denote values of X, n denotes a value of the difference type Distance, u, tmp, and m denote identifiers, r denotes a value of X&, t denotes a value of value type T.

Two other parts of the standard that are relevant to whether output iterators have value types:

The first of these passages suggests that "*i" is supposed to return a useful value, which contradicts the note in 24.1.2/2 saying that the only valid use of "*i" for output iterators is in an expression of the form "*i = t". The second of these passages appears to contradict Table 73, because it suggests that "*i"'s return value should be void. The second passage is also broken in the case of an iterator type, such as a non-const pointer, that satisfies both the output iterator requirements and the forward iterator requirements.

What should the standard say about *i's return value when i is an output iterator, and what should it say about that t is in the expression "*i = t"? Finally, should the standard say anything about output iterators' pointer and reference types?
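
A minimal output iterator sketch (hypothetical, for illustration only): the only supported use of *r is on the left-hand side of an assignment from a writable type, and the iterator has no meaningful value type.

    #include <iterator>
    #include <string>

    struct appender
    {
        typedef std::output_iterator_tag iterator_category;
        typedef void                     value_type;       // no value type
        typedef void                     difference_type;
        typedef void                     pointer;
        typedef void                     reference;

        std::string* s;
        explicit appender(std::string& str) : s(&str) {}

        appender& operator*()       { return *this; }                    // *r ...
        appender& operator=(char c) { s->push_back(c); return *this; }   // ... = o
        appender& operator++()      { return *this; }
        appender  operator++(int)   { return *this; }
    };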

Proposed resolution:

24.1 p1, change

All iterators i support the expression *i, resulting in a value of some class, enumeration, or built-in type T, called the value type of the iterator.

to

All input iterators i support the expression *i, resulting in a value of some class, enumeration, or built-in type T, called the value type of the iterator. All output iterators support the expression *i = o where o is a value of some type that is in the set of types that are writable to the particular iterator type of i.

24.1 p9, add

o denotes a value of some type that is writable to the output iterator.

Table 73, change

*a = t

to

*r = o

and change

*r++ = t

to

*r++ = o

[post-Redmond: Jeremy provided wording]

Rationale:

The LWG considered two options: change all of the language that seems to imply that output iterators have value types, thus making it clear that output iterators have no value types, or else define value types for output iterator consistently. The LWG chose the former option, because it seems clear that output iterators were never intended to have value types. This was a deliberate design decision, and any language suggesting otherwise is simply a mistake.

A future revision of the standard may wish to revisit this design decision.


325. Misleading text in moneypunct<>::do_grouping

Section: 22.2.6.3.2 [lib.locale.moneypunct.virtuals]  Status: Review  Submitter: Martin Sebor  Date: 02 Jul 2001

The Returns clause in 22.2.6.3.2, p3 says about moneypunct<charT>::do_grouping()

Returns: A pattern defined identically as the result of numpunct<charT>::do_grouping().241)

Footnote 241 then reads

This is most commonly the value "\003" (not "3").

The returns clause seems to imply that the two member functions must return an identical value, which in reality may or may not be true, since the facets are usually implemented in terms of struct std::lconv and return the values of its grouping and mon_grouping members, respectively. The footnote also implies that the member function of the moneypunct facet (rather than the overridden virtual functions in moneypunct_byname) most commonly returns "\003", which contradicts the C standard, which specifies the value "" for the (most common) "C" locale.

Proposed resolution:

Replace the text in Returns clause in 22.2.6.3.2, p3 with the following:

Returns: A pattern defined identically as, but not necessarily equal to, the result of numpunct<charT>::do_grouping().241)

and replace the text in Footnote 241 with the following:

To specify grouping by 3s the value is "\003", not "3".

Rationale:

The fundamental problem is that the description of the locale facet virtuals serves two purposes: describing the behavior of the base class, and describing the meaning of and constraints on the behavior in arbitrary derived classes. The new wording makes that separation a little bit clearer. The footnote (which is nonnormative) is not supposed to say what the grouping is in the "C" locale or in any other locale. It is just a reminder that the values are interpreted as small integers, not ASCII characters.
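
As an illustration of that point (not part of the proposed wording):

    #include <string>

    // Grouping values are small integers, not digit characters.
    const std::string by_threes("\003");   // group size 3, e.g. 1,234,567
    const std::string surprising("3");     // '3' has the value 51: groups of 51 digits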


327. Typo in time_get facet in table 52

Section: 22.1.1.1.1 [lib.locale.category]  Status: Ready  Submitter: Tiki Wan  Date: 06 Jul 2001

The wchar_t versions of time_get and time_get_byname are listed incorrectly in table 52, required instantiations. In both cases the second template parameter is given as OutputIterator. It should instead be InputIterator, since these are input facets.

Proposed resolution:

In table 52, required instantiations, in 22.1.1.1.1 [lib.locale.category], change

    time_get<wchar_t, OutputIterator>
    time_get_byname<wchar_t, OutputIterator>

to

    time_get<wchar_t, InputIterator>
    time_get_byname<wchar_t, InputIterator>

[Redmond: Very minor change in proposed resolution. Original had a typo, wchart instead of wchar_t.]


328. Bad sprintf format modifier in money_put<>::do_put()

Section: 22.2.6.2.2 [lib.locale.money.put.virtuals]  Status: Ready  Submitter: Martin Sebor  Date: 07 Jul 2001

The sprintf format string, "%.01f" (that's the digit one), in the description of the do_put() member functions of the money_put facet in 22.2.6.2.2, p1 is incorrect. First, the f format specifier is wrong for values of type long double, and second, the precision of 01 doesn't seem to make sense. What was most likely intended was "%.0Lf", that is, a precision of zero followed by the L length modifier.

Proposed resolution:

Change the format string to "%.0Lf".
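
For illustration, how the corrected specifier would be used (a hedged sketch; buffer handling is simplified):

    #include <cstdio>

    void format_units(long double units, char* buf)
    {
        std::sprintf(buf, "%.0Lf", units);   // precision 0, L length modifier
    }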

Rationale:

Fixes an obvious typo.


329. vector capacity, reserve and reallocation

Section: 23.2.4.2 [lib.vector.capacity], 23.2.4.3 [lib.vector.modifiers]  Status: Review  Submitter: Anthony Williams  Date: 13 Jul 2001

There is an apparent contradiction about which circumstances can cause a reallocation of a vector in Section 23.2.4.2 [lib.vector.capacity] and section 23.2.4.3 [lib.vector.modifiers].

23.2.4.2p5 says:

Notes: Reallocation invalidates all the references, pointers, and iterators referring to the elements in the sequence. It is guaranteed that no reallocation takes place during insertions that happen after a call to reserve() until the time when an insertion would make the size of the vector greater than the size specified in the most recent call to reserve().

Which implies if I do

  std::vector<int> vec;
  vec.reserve(23);
  vec.reserve(0);
  vec.insert(vec.end(),1);

then the implementation may reallocate the vector for the insert, as the size specified in the previous call to reserve was zero.

However, the previous paragraphs (23.2.4.2, p1-2) state:

(capacity) Returns: The total number of elements the vector can hold without requiring reallocation

...After reserve(), capacity() is greater or equal to the argument of reserve if reallocation happens; and equal to the previous value of capacity() otherwise...

This implies that vec.capacity() is still 23, and so the insert() should not require a reallocation, as vec.size() is 0. This is backed up by 23.2.4.3p1:

(insert) Notes: Causes reallocation if the new size is greater than the old capacity.

Though this doesn't rule out reallocation if the new size is less than the old capacity, I think the intent is clear.

Proposed resolution:

Change the wording of 23.2.4.2 [lib.vector.capacity] paragraph 5 to:

Notes: Reallocation invalidates all the references, pointers, and iterators referring to the elements in the sequence. It is guaranteed that no reallocation takes place during insertions that happen after a call to reserve() until the time when an insertion would make the size of the vector greater than the value of capacity().

[Redmond: original proposed resolution was modified slightly. In the original, the guarantee was that there would be no reallocation until the size would be greater than the value of capacity() after the most recent call to reserve(). The LWG did not believe that the "after the most recent call to reserve()" added any useful information.]

Rationale:

There was general agreement that, when reserve() is called twice in succession and the argument to the second invocation is smaller than the argument to the first, the intent was for the second invocation to have no effect. Wording implying that such cases have an effect on reallocation guarantees was inadvertent.


331. bad declaration of destructor for ios_base::failure

Section: 27.4.2.1.1 [lib.ios::failure]  Status: Ready  Submitter: PremAnand M. Rao  Date: 23 Aug 2001

With the change in 17.4.4.8 [lib.res.on.exception.handling] to state "An implementation may strengthen the exception-specification for a non-virtual function by removing listed exceptions." (issue 119) and the following declaration of ~failure() in ios_base::failure

    namespace std {
       class ios_base::failure : public exception {
       public:
           ...
           virtual ~failure();
           ...
       };
     }

the class failure cannot be implemented since in 18.6.1 [lib.exception] the destructor of class exception has an empty exception specification:

    namespace std {
       class exception {
       public:
         ...
         virtual ~exception() throw();
         ...
       };
     }

Proposed resolution:

Remove the declaration of ~failure().

Rationale:

The proposed resolution is consistent with the way that destructors of other classes derived from exception are handled.


333. does endl imply synchronization with the device?

Section: 27.6.2.7 [lib.ostream.manip]  Status: Review  Submitter: PremAnand M. Rao  Date: 27 Aug 2001

A footnote in 27.6.2.7 [lib.ostream.manip] states:

[Footnote: The effect of executing cout << endl is to insert a newline character in the output sequence controlled by cout, then synchronize it with any external file with which it might be associated. --- end footnote]

Does the term "file" here refer to the external device? This leads to some implementation ambiguity on systems with fully buffered files where a newline does not cause a flush to the device.

Choosing to sync with the device leads to significant performance penalties for each call to endl, while not sync-ing leads to errors under special circumstances.

I could not find any other statement that explicitly defined the behavior one way or the other.

Proposed resolution:

Remove footnote 300 from section 27.6.2.7 [lib.ostream.manip].

Rationale:

We already have normative text saying what endl does: it inserts a newline character and calls flush. This footnote is at best redundant, at worst (as this issue says) misleading, because it appears to make promises about what flush does.


334. map::operator[] specification forces inefficient implementation

Section: 23.3.1.2 [lib.map.access]  Status: Review  Submitter: Andrea Griffini  Date: 02 Sep 2001

The current standard describes map::operator[] using a code example. That code example is, however, quite inefficient because it requires several useless copies of both the passed key_type value and of default-constructed mapped_type instances. My opinion is that the committee did not mean to require all those temporary copies.

Currently map::operator[] behaviour is specified as:

  Returns:
    (*((insert(make_pair(x, T()))).first)).second.

This specification, however, uses make_pair, a function template whose parameters will in this case be deduced to be of type const key_type& and const T&. This will create a pair<key_type,T>, which isn't the type expected by map::insert, so another copy will be required, using the template conversion constructor available in pair, to build the required pair<const key_type,T> instance.

If we consider calls to the key_type copy constructor and to the mapped_type default constructor and copy constructor as observable behaviour (as I think we should), then the standard is here requiring two copies of a key_type element, plus a default construction and two copy constructions of a mapped_type (assuming the addressed element is already present in the map; otherwise at least one more copy construction for each type).

A simple (half) solution would be replacing the description with:

  Returns:
    (*((insert(value_type(x, T()))).first)).second.

This will remove the wrong typed pair construction that requires one extra copy of both key and value.

However, using map::insert still requires temporary objects, while the operation, from a logical point of view, doesn't require any.

I think that a better solution would be to leave implementers free to use an approach other than map::insert, whose interface forces default-constructed temporaries and copies in this case. The best solution, in my opinion, would be simply to require map::operator[] to return a reference to the mapped_type part of the contained element, creating a default element with the specified key if no such element is already present in the container. A logarithmic complexity requirement should also be specified for the operation.

This would allow library implementers to write alternative implementations that do not use map::insert and that reach optimal performance whether the addressed element is present in or absent from the map (no temporaries at all, and just the creation of a new pair inside the container if the element isn't present). Some implementers have already taken this option, but I think that the current wording of the standard rules that out as non-conforming.
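
One possible technique (a sketch, not proposed wording; the free function subscript() is hypothetical): locate the element once with lower_bound() and use the hinted insert only when the key is absent. A library implementation with access to its own node structure could go further and avoid the temporary pair entirely.

    #include <map>
    #include <utility>

    template <class Key, class T, class Compare, class Alloc>
    T& subscript(std::map<Key, T, Compare, Alloc>& m, const Key& k)
    {
        typename std::map<Key, T, Compare, Alloc>::iterator i = m.lower_bound(k);
        if (i == m.end() || m.key_comp()(k, i->first))
            i = m.insert(i, std::make_pair(k, T()));   // absent: insert at the hint
        return i->second;
    }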

Proposed resolution:

Replace 23.3.1.2 [lib.map.access] paragraph 1 with

-1- Effects: If there is no key equivalent to x in the map, inserts value_type(x, T()) into the map.

-2- Returns: A reference to the mapped_type corresponding to x in *this.

-3- Complexity: logarithmic.

[This is the second option mentioned above. Howard provided wording. We may also wish to have a blanket statement somewhere in clause 17 saying that we do not intend the semantics of sample code fragments to be interpreted as specifying exactly how many copies are made. See issue 98 for a similar problem.]


335. minor issue with char_traits, table 37

Section: 21.1.1 [lib.char.traits.require]  Status: Ready  Submitter: Andy Sawyer  Date: 06 Sep 2001

Table 37, in 21.1.1 [lib.char.traits.require], describes char_traits::assign as:

  X::assign(c,d)   assigns c = d.

And para 1 says:

[...] c and d denote values of type CharT [...]

Naturally, if c and d are values, then the assignment is (effectively) meaningless. It's clearly intended that (in the case of assign, at least) 'c' be a reference type.

I did a quick survey of the four implementations I happened to have lying around, and sure enough they all have signatures:

    assign( charT&, const charT& );

(or the equivalent). It's also described this way in Nico's book. (Not to mention the synopses of char_traits<char> in 21.1.3.1 and char_traits<wchar_t> in 21.1.3.2...)

Proposed resolution:

Add the following to 21.1.1 para 1:

r denotes an lvalue of CharT

and change the description of assign in the table to:

  X::assign(r,d)   assigns r = d

336. Clause 17 lack of references to deprecated headers

Section: 17 [lib.library]  Status: Open  Submitter: Detlef Vollmann  Date: 05 Sep 2001

From c++std-edit-873:

17.4.1.2 [lib.headers], Table 11. In this table, the header <strstream> is missing.

This shows a general problem: The whole clause 17 refers quite often to clauses 18 through 27, but D.7 is also a part of the standard library (though a deprecated one).

Proposed resolution:

[Redmond: The LWG agrees that <strstream> should be added to table 11. A review is needed to determine whether there are any other places in clause 17 where clause D material should be referred to. Beman will review clause 17.]


337. replace_copy_if's template parameter should be InputIterator

Section: 25.2.4 [lib.alg.replace]  Status: Ready  Submitter: Detlef Vollmann  Date: 07 Sep 2001

From c++std-edit-876:

In section 25.2.4 [lib.alg.replace] before p4: The name of the first parameter of template replace_copy_if should be "InputIterator" instead of "Iterator". According to 17.3.2.1 [lib.type.descriptions] p1 the parameter name conveys real normative meaning.

Proposed resolution:

Change Iterator to InputIterator.


338. Is whitespace allowed between `-' and a digit?

Section: 22.2 [lib.locale.categories]  Status: Review  Submitter: Martin Sebor  Date: 17 Sep 2001

From Stage 2 processing in 22.2.2.1.2 [lib.facet.num.get.virtuals], p8 and 9 (the original text or the text corrected by the proposed resolution of issue 221) it seems clear that no whitespace is allowed within a number, but 22.2.3.1 [lib.locale.numpunct], p2, which gives the format for integer and floating point values, says that whitespace is optional between a plusminus and the digits that follow it.

The text needs to be clarified to either consistently allow or disallow whitespace between a plusminus and the digits that follow. It might be worthwhile to consider the fact that the C library stdio facility does not permit whitespace embedded in numbers and neither does the C or C++ core language (the syntax of integer-literals is given in 2.13.1 [lex.icon], that of floating-point-literals in 2.13.3 [lex.fcon] of the C++ standard).

Proposed resolution:

Change the first part of 22.2.3.1 [lib.locale.numpunct] paragraph 2 from:

The syntax for number formats is as follows, where digit represents the radix set specified by the fmtflags argument value, whitespace is as determined by the facet ctype<charT> (22.2.1.1), and thousands-sep and decimal-point are the results of corresponding numpunct<charT> members. Integer values have the format:

  integer   ::= [sign] units
  sign      ::= plusminus [whitespace]
  plusminus ::= '+' | '-'
  units     ::= digits [thousands-sep units]
  digits    ::= digit [digits]

to:

The syntax for number formats is as follows, where digit represents the radix set specified by the fmtflags argument value, and thousands-sep and decimal-point are the results of corresponding numpunct<charT> members. Integer values have the format:

  integer   ::= [sign] units
  sign      ::= plusminus
  plusminus ::= '+' | '-'
  units     ::= digits [thousands-sep units]
  digits    ::= digit [digits]

Rationale:

It's not clear whether the format described in 22.2.3.1 [lib.locale.numpunct] paragraph 2 has any normative weight: nothing in the standard says how, or whether, it's used. However, there's no reason for it to differ gratuitously from the very specific description of numeric processing in 22.2.2.1.2 [lib.facet.num.get.virtuals]. The proposed resolution removes all mention of "whitespace" from that format.


339. definition of bitmask type restricted to clause 27

Section: 22.2.1 [lib.category.ctype], 17.3.2.1.2 [lib.bitmask.types]  Status: Review  Submitter: Martin Sebor  Date: 17 September 2001

The ctype_category::mask type is declared to be an enum in 22.2.1 [lib.category.ctype] with p1 then stating that it is a bitmask type, most likely referring to the definition of bitmask type in 17.3.2.1.2 [lib.bitmask.types], p1. However, the said definition only applies to clause 27, making the reference in 22.2.1 somewhat dubious.

Proposed resolution:

Clarify 17.3.2.1.2, p1 by changing the current text from

Several types defined in clause 27 are bitmask types. Each bitmask type can be implemented as an enumerated type that overloads certain operators, as an integer type, or as a bitset (23.3.5 [lib.template.bitset]).

to read

Several types defined in clauses lib.language.support through lib.input.output and Annex D are bitmask types. Each bitmask type can be implemented as an enumerated type that overloads certain operators, as an integer type, or as a bitset (lib.template.bitset).

Additionally, change the definition in 22.2.1 to adopt the same convention as in clause 27 by replacing the existing text with the following (note, in particular, the cross-reference to 17.3.2.1.2 in 22.2.1, p1):

22.2.1 The ctype category [lib.category.ctype]

namespace std {
    class ctype_base {
    public:
        typedef T mask;

        // numeric values are for exposition only.
        static const mask space = 1 << 0;
        static const mask print = 1 << 1;
        static const mask cntrl = 1 << 2;
        static const mask upper = 1 << 3;
        static const mask lower = 1 << 4;
        static const mask alpha = 1 << 5;
        static const mask digit = 1 << 6;
        static const mask punct = 1 << 7;
        static const mask xdigit = 1 << 8;
        static const mask alnum = alpha | digit;
        static const mask graph = alnum | punct;
    };
}

The type mask is a bitmask type (17.3.2.1.2 [lib.bitmask.types]).


340. interpretation of has_facet<Facet>(loc)

Section: 22.1.1.1.1 [lib.locale.category]  Status: Review  Submitter: Martin Sebor  Date: 18 Sep 2001

It's unclear whether 22.1.1.1.1, p3 says that has_facet<Facet>(loc) returns true for any Facet from Table 51 or whether it includes Table 52 as well:

For any locale loc either constructed, or returned by locale::classic(), and any facet Facet that is a member of a standard category, has_facet<Facet>(loc) is true. Each locale member function which takes a locale::category argument operates on the corresponding set of facets.

It seems that it comes down to which facets are considered to be members of a standard category. Intuitively, I would classify all the facets in Table 52 as members of their respective standard categories, but there is an unbounded set of them...

The paragraph implies that, for instance, has_facet<num_put<C, OutputIterator> >(loc) must always return true. I don't think that's possible. If it were, then use_facet<num_put<C, OutputIterator> >(loc) would have to return a reference to a distinct object for each valid specialization of num_put<C, OutputIterator>, which is clearly impossible.

On the other hand, if none of the facets in Table 52 is a member of a standard category then none of the locale member functions that operate on entire categories of facets will work properly.

It seems that p3 should mention that this is required (permitted?) to hold only for specializations of Facet from Table 52 on C from the set { char, wchar_t }, and on InputIterator and OutputIterator from the set { {i,o}streambuf_iterator<{char,wchar_t}> }.

Proposed resolution:

In 22.1.1.1.1 [lib.locale.category], paragraph 3, change "that is a member of a standard category" to "shown in Table 51".

Rationale:

The facets in Table 52 are an unbounded set. Locales should not be required to contain an infinite number of facets.

It's not necessary to talk about which values of InputIterator and OutputIterator must be supported. Table 51 already contains a complete list of the ones we need.


341. Vector reallocation and swap

Section: 23.2.4.2 [lib.vector.capacity]  Status: Review  Submitter: Anthony Williams  Date: 27 Sep 2001

It is a common idiom to reduce the capacity of a vector by swapping it with an empty one:

  std::vector<SomeType> vec;
  // fill vec with data
  std::vector<SomeType>().swap(vec);
  // vec is now empty, with minimal capacity

However, the wording of 23.2.4.2 [lib.vector.capacity] paragraph 5 prevents the capacity of a vector being reduced, following a call to reserve(). This invalidates the idiom, as swap() is thus prevented from reducing the capacity. The proposed wording for issue 329 does not affect this. Consequently, the example above requires the temporary to be expanded to cater for the contents of vec, and the contents be copied across. This is a linear-time operation.

However, the container requirements state that swap must have constant complexity (23.1 [lib.container.requirements] note to table 65).

This is an important issue, as reallocation affects the validity of references and iterators.

If the wording of 23.2.4.2p5 is taken to be the desired intent, then references and iterators remain valid after a call to swap, if they refer to an element before the new end() of the vector into which they originally pointed, in which case they refer to the element at the same index position. Iterators and references that referred to an element whose index position was beyond the new end of the vector are invalidated.

If the note to table 65 is taken as the desired intent, then there are two possibilities with regard to iterators and references:

  1. All Iterators and references into both vectors are invalidated.
  2. Iterators and references into either vector remain valid, and remain pointing to the same element. Consequently iterators and references that referred to one vector now refer to the other, and vice-versa.

Proposed resolution:

Add a new paragraph after 23.2.4.2 [lib.vector.capacity] paragraph 5:

  void swap(vector<T,Allocator>& x);

Effects: Exchanges the contents and capacity() of *this with that of x.

Complexity: Constant time.

[This solves the problem reported for this issue. We may also have a problem with a circular definition of swap() for other containers.]

Rationale:

swap should be constant time. The clear intent is that it should just do pointer twiddling, and that it should exchange all properties of the two vectors, including their reallocation guarantees.


342. seek and eofbit

Section: 27.6.1.3 [lib.istream.unformatted]  Status: Open  Submitter: Howard Hinnant  Date: 09 Oct 2001

I think we have a defect.

According to LWG issue 60, which is now a DR, the description of seekg in 27.6.1.3 [lib.istream.unformatted] paragraph 38 now looks like:

Behaves as an unformatted input function (as described in 27.6.1.3, paragraph 1), except that it does not count the number of characters extracted and does not affect the value returned by subsequent calls to gcount(). After constructing a sentry object, if fail() != true, executes rdbuf()->pubseekpos(pos).

And according to LWG issue 243, which is also now a DR, 27.6.1.3, paragraph 1 looks like:

Each unformatted input function begins execution by constructing an object of class sentry with the default argument noskipws (second) argument true. If the sentry object returns true, when converted to a value of type bool, the function endeavors to obtain the requested input. Otherwise, if the sentry constructor exits by throwing an exception or if the sentry object returns false, when converted to a value of type bool, the function returns without attempting to obtain any input. In either case the number of extracted characters is set to 0; unformatted input functions taking a character array of non-zero size as an argument shall also store a null character (using charT()) in the first location of the array. If an exception is thrown during input then ios::badbit is turned on in *this's error state. If (exceptions() & badbit) != 0 then the exception is rethrown. It also counts the number of characters extracted. If no exception has been thrown it ends by storing the count in a member object and returning the value specified. In any event the sentry object is destroyed before leaving the unformatted input function.

And finally 27.6.1.1.2/5 says this about sentry:

If, after any preparation is completed, is.good() is true, ok_ != false; otherwise, ok_ == false.

So although the seekg paragraph says that the operation proceeds if !fail(), the behavior of unformatted functions says the operation proceeds only if good(). The two statements are contradictory when only eofbit is set. I don't think the current text is clear which condition should be respected.
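
For illustration, a minimal sketch (the function name sketch is arbitrary) of a case where only eofbit is set, so the two passages give different answers as to whether the seek is performed:

  #include <sstream>

  void sketch()
  {
      std::istringstream in("42");
      int i;
      in >> i;      // the extraction succeeds but hits end of file, so
                    // eofbit is set while failbit is not:
                    // fail() == false, good() == false

      in.seekg(0);  // the seekg wording says this proceeds because
                    // fail() != true; the unformatted-input wording makes
                    // the sentry fail because good() is false
  }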

Further discussion from Redmond:

PJP: It doesn't seem quite right to say that seekg is "unformatted". That makes specific claims about sentry that aren't quite appropriate for seeking, which has less fragile failure modes than actual input. If we do really mean that it's unformatted input, it should behave the same way as other unformatted input. On the other hand, "principle of least surprise" is that seeking from EOF ought to be OK.

Dietmar: nothing should depend on eofbit. Eofbit should only be examined by the user to determine why something failed.

[Taken from c++std-lib-8873, c++std-lib-8874, c++std-lib-8876]

Proposed resolution:

[Howard will do a survey to find out if there are any other places where we have a problem, where the difference between fail() and !good() is important.]


345. type tm in <cwchar>

Section: 21.4 [lib.c.strings]  Status: Ready  Submitter: Clark Nelson  Date: 19 Oct 2001

C99, and presumably Amendment 1 to C90, specify that <wchar.h> declares struct tm as an incomplete type. However, Table 48 in 21.4 [lib.c.strings] does not mention the type tm as being declared in <cwchar>. Is this omission intentional or accidental?
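
For illustration, a minimal sketch of code that relies on tm being usable through <cwchar> (the complete definition is taken from <ctime>; the function name sketch is arbitrary):

  #include <ctime>    // complete definition of tm
  #include <cwchar>   // declares wcsftime, which takes a const tm*

  void sketch()
  {
      std::tm t = {};
      wchar_t buf[32];
      std::wcsftime(buf, 32, L"%Y", &t);
  }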

Proposed resolution:

In section 21.4 [lib.c.strings], add "tm" to Table 48.


346. Some iterator member functions should be const

Section: 24.1 [lib.iterator.requirements]  Status: Ready  Submitter: Jeremy Siek  Date: 20 Oct 2001

Iterator member functions and operators that do not change the state of the iterator should be defined as const member functions or as functions that take iterators either by const reference or by value. The standard does not explicitly state which functions should be const. Since this is a fairly common mistake, the following changes are suggested to make this explicit.

The tables almost indicate constness properly through naming: r for non-const and a, b for const iterators. The following changes make this more explicit and also fix a couple of problems.
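
For illustration, a minimal sketch of an iterator written to these conventions (the name example_iterator is arbitrary): the non-mutating operations are const member functions, the mutating ones are not.

  template <class T>
  class example_iterator
  {
  public:
      explicit example_iterator(T* p) : p_(p) {}

      // Non-mutating operations: usable on a const iterator.
      T& operator*() const  { return *p_; }
      T* operator->() const { return p_; }
      bool operator==(const example_iterator& rhs) const { return p_ == rhs.p_; }

      // Mutating operation: requires a non-const iterator.
      example_iterator& operator++() { ++p_; return *this; }

  private:
      T* p_;
  };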

Proposed resolution:

In 24.1 [lib.iterator.requirements], change the first sentence of paragraph 9 from "In the following sections, a and b denote values of X..." to "In the following sections, a and b denote values of type const X...".

In Table 73, change

    a->m   U&         ...

to

    a->m   const U&   ...
    r->m   U&         ...

In Table 73 expression column, change

    *a = t

to

    *r = t

[Redmond: The container requirements should be reviewed to see if the same problem appears there.]


347. locale::category and bitmask requirements

Section: 22.1.1.1.1 [lib.locale.category]  Status: New  Submitter: P.J. Plauger, Nathan Myers  Date: 23 Oct 2001

In 22.1.1.1.1 [lib.locale.category] paragraph 1, the category members are described as bitmask elements. In fact, the bitmask requirements in 17.3.2.1.2 [lib.bitmask.types] don't seem quite right: none and all are bitmask constants, not bitmask elements.

In particular, the requirements for none interact poorly with the requirement that the LC_* constants from the C library must be recognizable as C++ locale category constants. LC_* values should not be mixed with these values to make category values.

We have two options for the proposed resolution. Informally: option 1 removes the requirement that LC_* values be recognized as category arguments. Option 2 changes the category type so that this requirement is implementable, by allowing none to be some value such as 0x1000 instead of 0.

Nathan writes: "I believe my proposed resolution [Option 2] merely re-expresses the status quo more clearly, without introducing any changes beyond resolving the DR."

Proposed resolution:

Option 1:
Replace the first two paragraphs of 22.1.1.1 [lib.locale.types] with:

    typedef int category;

Valid category values include the locale member bitmask elements collate, ctype, monetary, numeric, time, and messages, each of which represents a single locale category. In addition, locale member bitmask constant none is defined as zero and represents no category. And locale member bitmask constant all is defined such that the expression

    (collate | ctype | monetary | numeric | time | messages | all) == all

is true, and represents the union of all categories. Further the expression (X | Y), where X and Y each represent a single category, represents the union of the two categories.

locale member functions expecting a category argument require one of the category values defined above, or the union of two or more such values. Such a category argument identifies a set of locale categories. Each locale category, in turn, identifies a set of locale facets, including at least those shown in Table 51:

Option 2:
Replace the first paragraph of 22.1.1.1 [lib.locale.types] with:

Valid category values include the enumerated values. In addition, the result of applying the commutative operators | and & to any two valid values is valid, and results in the setwise union and intersection, respectively, of the argument categories. The values all and none are defined such that for any valid value cat, the expressions (cat | all) == all, (cat & all) == cat, (cat | none) == cat and (cat & none) == none are true. For non-equal values cat1 and cat2 of the remaining enumerated values, (cat1 & cat2) == none is true. For any valid categories cat1 and cat2, the result of (cat1 & ~cat2) is valid, and equals the setwise union of those categories found in cat1 but not found in cat2. [Footnote: it is not required that all equal the setwise union of the other enumerated values; implementations may add extra categories.]
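
For illustration, a minimal sketch of the identities the Option 2 wording is intended to guarantee for category values (the function name sketch is arbitrary):

  #include <cassert>
  #include <locale>

  void sketch()
  {
      std::locale::category cat = std::locale::collate | std::locale::ctype;

      assert((cat | std::locale::all)  == std::locale::all);
      assert((cat & std::locale::all)  == cat);
      assert((cat | std::locale::none) == cat);
      assert((cat & std::locale::none) == std::locale::none);
      assert((std::locale::collate & std::locale::ctype) == std::locale::none);
  }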


348. Minor issue with std::pair operator<

Section: 20.2.2 [lib.pairs]  Status: New  Submitter: Andy Sawyer  Date: 23 Oct 2001

The current wording of 20.2.2 [lib.pairs] p6 precludes the use of operator< on any pair type which contains a pointer.
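
For illustration, a minimal sketch of the problem (the function name sketch is arbitrary): the built-in operator< gives unspecified results for pointers to unrelated objects, whereas std::less is required to yield a total order.

  #include <utility>

  void sketch()
  {
      int a, b;
      std::pair<int*, int> p1(&a, 1);
      std::pair<int*, int> p2(&b, 2);

      // With the current wording this compares &a < &b using the built-in
      // operator<, whose result is unspecified for pointers to unrelated
      // objects; with the proposed wording it would use std::less<int*>,
      // which must impose a total order.
      bool less_than = p1 < p2;
      (void)less_than;
  }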

Proposed resolution:

In 20.2.2 [lib.pairs] paragraph 6, replace:

    Returns: x.first < y.first || (!(y.first < x.first) && x.second <
        y.second).

With:

    Returns: std::less<T1>()( x.first, y.first ) ||
             (!std::less<T1>()( y.first, x.first) && 
             std::less<T2>()( x.second, y.second ) )


349. Minor typographical error in ostream_iterator

Section: 24.5.2 [lib.ostream.iterator]  Status: New  Submitter: Andy Sawyer  Date: 24 Oct 2001

24.5.2 [lib.ostream.iterator] states:

    [...]

    private:
    // basic_ostream<charT,traits>* out_stream; exposition only
    // const char* delim; exposition only

Whilst it's clearly marked "exposition only", I suspect 'delim' should be of type 'const charT*'.

Proposed resolution:

In 24.5.2 [lib.ostream.iterator], replace const char* delim with const charT* delim.


350. allocator<>::address

Section: 20.4.1.1 [lib.allocator.members], 20.1.5 [lib.allocator.requirements], 17.4.1.1 [lib.contents]  Status: New  Submitter: Nathan Myers  Date: 25 Oct 2001

See c++std-lib-9006 and c++std-lib-9007. This issue is taken verbatim from -9007.

The core language feature allowing definition of operator&() applied to any non-builtin type makes that operator often unsafe to use in implementing libraries, including the Standard Library. The result is that many library facilities fail for legal user code, such as the fragment

  class A { private: A* operator&(); };
  std::vector<A> aa;

  class B { };
  B* operator&(B&) { return 0; }
  std::vector<B> ba;

In particular, the requirements table for Allocator (Table 32) specifies no semantics at all for member address(), and allocator<>::address is defined in terms of unadorned operator &.

Proposed resolution:

In 20.4.1.1, Change the definition of allocator<>::address from:

Returns: &x

to:

Returns: The value that the built-in operator&(x) would return if not overloaded.

In 20.1.5, Table 32, add to the Notes column of the a.address(r) and a.address(s) lines, respectively:

  allocator<T>::address(r)
  allocator<T>::address(s)

In addition, in clause 17.4.1.1, add a statement:

The Standard Library does not apply operator& to any type for which operator& may be overloaded.

Rationale:

The obvious implementations for std::allocator<>::address are

   return reinterpret_cast<T*>(&reinterpret_cast<char&>(o));

and

   return reinterpret_cast<T const*>(&reinterpret_cast<char const&>(o));

but to define them formally in terms of reinterpret_cast<> seems to introduce semantic difficulties best avoided. Using a.address() should not introduce unspecified or implementation-defined semantics into a user program.
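
For illustration, a minimal sketch (the name address_of is arbitrary) of an implementation along the lines above that never applies an overloadable operator&:

  template <class T>
  T* address_of(T& o)
  {
      // The built-in & is applied to a char lvalue, and operator& cannot
      // be overloaded for a built-in type.
      return reinterpret_cast<T*>(&reinterpret_cast<char&>(o));
  }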

----- End of document -----