The byte order mark (BOM) is a Unicode character used to signal the endianness (byte order) of a text file or stream. Its code point is U+FEFF. Use of a BOM is optional and, if used, it should appear at the start of the text stream. Beyond its specific use as a byte-order indicator, the BOM character may also indicate which of the several Unicode representations the text is encoded in.[1]

Because Unicode can be encoded as 16-bit or 32-bit integers, a computer receiving these encodings from arbitrary sources needs to know which byte order the integers are encoded in. The BOM gives the producer of the text a way to describe the text stream's endianness to the consumer of the text without requiring some contract or metadata outside of the text stream itself. Once the receiving computer has consumed the text stream, it presumably processes the characters in its own native byte order and no longer needs the BOM. Hence the need for a BOM arises in the context of text interchange, rather than in normal text processing within a closed environment.

Usage

If the BOM character appears in the middle of a data stream, Unicode says it should be interpreted as a "zero-width non-breaking space" (a character with no width whose presence prevents a line break at that point). In Unicode 3.2, this usage was deprecated in favour of the "Word Joiner" character, U+2060,[1] so that U+FEFF can eventually be reserved solely for use as a BOM.

UTF-8

The UTF-8 representation of the BOM is the byte sequence 0xEF, 0xBB, 0xBF. A text editor or web browser interpreting the text as ISO-8859-1 or CP1252 will display the characters ï»¿ for this.

The Unicode Standard does permit the BOM in UTF-8,[2] but does not require or recommend its use.[3] Byte order has no meaning in UTF-8[4] so in UTF-8 the BOM serves only to identify a text stream or file as UTF-8.
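
In practice, this means a robust UTF-8 reader should tolerate an optional leading BOM. As a minimal sketch in Python (the file name is hypothetical), the built-in 'utf-8-sig' codec does exactly this:

  # 'utf-8-sig' strips a leading BOM if one is present and otherwise
  # behaves like plain UTF-8, so the same code handles both kinds of files.
  with open("example.txt", "r", encoding="utf-8-sig") as f:
      text = f.read()
  assert not text.startswith("\ufeff")  # any BOM was consumed by the codec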

One reason the UTF-8 BOM is not recommended is that software without Unicode support may accept UTF-8 bytes at certain points inside a text but not at the start. For instance, UTF-8 bytes can be placed between the quotes of string constants in the source files of many programming languages, and when executed the program will write correct UTF-8 to a file or display, even though the language knows nothing about UTF-8. This provides an easy migration path for converting systems to Unicode and removing all legacy encodings, without simultaneously upgrading the programming language. The three unexpected bytes of a BOM break this scheme, however, because they sit at the start of the source file, where they are almost certain to be a syntax error.

A leading BOM can also defeat software that uses pattern matching on the start of a text file, since it inserts 3 bytes before the pattern. Though commonly associated with the Unix shebang at the start of an interpreted script,[5] the problem is more widespread. For instance in PHP, the existence of a BOM will cause the page to begin output before the initial code is interpreted, causing problems if the page is trying to send custom HTTP headers (which must be set before output begins).

Some common programs from Microsoft, such as Notepad and Visual C++,[6] add BOMs to UTF-8 files by default. Google Docs adds a BOM when a Microsoft Word document is downloaded as a .txt file.

UTF-16

In UTF-16, a BOM (U+FEFF) may be placed as the first character of a file or character stream to indicate the endianness (byte order) of all the 16-bit code units of the file or stream.

  • If the 16-bit units are represented in big-endian byte order, this BOM character will appear in the sequence of bytes as 0xFE followed by 0xFF. This sequence appears as the ISO-8859-1 characters þÿ in a text display that expects the text to be ISO-8859-1, although UTF-16 text will be more or less unreadable if the text display doesn't support UTF-16.
  • If the 16-bit units use little-endian order, the sequence of bytes will have 0xFF followed by 0xFE. This sequence appears as the ISO-8859-1 characters ÿþ in a text display that expects the text to be ISO-8859-1.

Programs expecting UTF-8 may show these bytes as stray characters or error indicators, depending on how they handle UTF-8 encoding errors. In all cases they will probably display the rest of the file as garbage, although a UTF-16 text containing only ASCII will remain fairly readable.

For the IANA registered charsets UTF-16BE and UTF-16LE, a byte order mark should not be used because the names of these character sets already determine the byte order. If encountered anywhere in such a text stream, U+FEFF is to be interpreted as a "zero width no-break space".
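
Python's codecs illustrate this convention; a small sketch (the output shown for the plain "utf-16" codec assumes a little-endian machine):

  # The byte-order-specific codecs emit no BOM, since the name already
  # fixes the byte order; the plain "utf-16" codec prepends one.
  "A".encode("utf-16-be")  # b'\x00A'          -- no BOM
  "A".encode("utf-16-le")  # b'A\x00'          -- no BOM
  "A".encode("utf-16")     # b'\xff\xfeA\x00'  -- BOM, little-endian here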

Clause D98 of conformance (section 3.10) of the Unicode standard states, "The UTF-16 encoding scheme may or may not begin with a BOM. However, when there is no BOM, and in the absence of a higher-level protocol, the byte order of the UTF-16 encoding scheme is big-endian." Whether or not a higher-level protocol is in force is open to interpretation. Files local to a computer for which the native byte ordering is little-endian, for example, might be argued to be encoded as UTF-16LE implicitly. Therefore the presumption of big-endian is widely ignored. When those same files are accessible on the Internet, on the other hand, no such presumption can be made. Searching for ASCII characters or just the space character (U+0020) is a method of determining the UTF-16 byte order.
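
A minimal Python sketch of that heuristic, assuming the text is mostly ASCII: in big-endian UTF-16 an ASCII character encodes as 0x00 followed by the ASCII byte, while in little-endian UTF-16 the zero byte comes second, so the positions of the zero bytes reveal the byte order.

  def guess_utf16_order(data: bytes) -> str:
      """Guess UTF-16 byte order for mostly-ASCII text (heuristic only)."""
      even_zeros = data[0::2].count(0)  # zero bytes at even offsets
      odd_zeros = data[1::2].count(0)   # zero bytes at odd offsets
      return "UTF-16BE" if even_zeros > odd_zeros else "UTF-16LE"

  guess_utf16_order("Hello, world".encode("utf-16-be"))  # -> 'UTF-16BE'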

UTF-32

Although a BOM could be used with UTF-32, this encoding is rarely used for transmission. Otherwise the same rules as for UTF-16 are applicable.

Representations of byte order marks by encoding

Encoding          Representation (hexadecimal)  Representation (decimal)  Representation (ISO-8859-1)
UTF-8[t 1]        EF BB BF                      239 187 191               ï»¿
UTF-16 (BE)       FE FF                         254 255                   þÿ
UTF-16 (LE)       FF FE                         255 254                   ÿþ
UTF-32 (BE)       00 00 FE FF                   0 0 254 255               □□þÿ (□ is the ASCII null character)
UTF-32 (LE)       FF FE 00 00                   255 254 0 0               ÿþ□□ (□ is the ASCII null character)
UTF-7[t 1]        2B 2F 76 38[t 2]              43 47 118 56              +/v8
                  2B 2F 76 39                   43 47 118 57              +/v9
                  2B 2F 76 2B                   43 47 118 43              +/v+
                  2B 2F 76 2F                   43 47 118 47              +/v/
UTF-1[t 1]        F7 64 4C                      247 100 76                ÷dL
UTF-EBCDIC[t 1]   DD 73 66 73                   221 115 102 115           Ýsfs
SCSU[t 1]         0E FE FF[t 3]                 14 254 255                □þÿ (□ is the ASCII "shift out" character)
BOCU-1[t 1]       FB EE 28                      251 238 40                ûî(
GB-18030[t 1]     84 31 95 33                   132 49 149 51             □1■3 (□ and ■ are unmapped ISO-8859-1 characters)
  1. ^ a b c d e f g This is not really a "byte order" mark, since the byte is also the code unit in these encodings and there is no byte order to resolve. The sequence can be used to identify the encoding, however.[4][7]
  2. ^ In UTF-7, the fourth byte of the BOM, before encoding as base64, is 001111xx in binary, and xx depends on the next character (the first character after the BOM). Hence, technically, the fourth byte is not purely a part of the BOM, but also contains information about the next (non-BOM) character. For xx=00, 01, 10, 11, this byte is, respectively, 38, 39, 2B, or 2F when encoded as base64. If no following character is encoded, 38 is used for the fourth byte and the following byte is 2D.
  3. ^ SCSU allows other encodings of U+FEFF, the shown form is the signature recommended in UTR #6.[8]
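
The signatures in the table above can be used to sniff an encoding by prefix. A minimal Python sketch covering the most common signatures:

  # Longer signatures must be tested first: the UTF-32LE BOM begins
  # with the UTF-16LE BOM, so FF FE 00 00 is checked before FF FE.
  BOMS = [
      (b"\x00\x00\xfe\xff", "UTF-32BE"),
      (b"\xff\xfe\x00\x00", "UTF-32LE"),
      (b"\xef\xbb\xbf", "UTF-8"),
      (b"\xfe\xff", "UTF-16BE"),
      (b"\xff\xfe", "UTF-16LE"),
  ]

  def sniff_bom(data: bytes):
      for bom, name in BOMS:
          if data.startswith(bom):
              return name
      return None  # no BOM: the encoding must be determined some other way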

References

  1. ^ a b Unicode FAQ: UTF-8, UTF-16, UTF-32 & BOM
  2. ^ "The Unicode Standard 5.0, Chapter 2:General Structure" (PDF). pp. 36. http://www.unicode.org/versions/Unicode5.0.0/ch02.pdf. Retrieved 2009-03-29. "Table 2-4. The Seven Unicode Encoding Schemes" 
  3. ^ "The Unicode Standard 5.0, Chapter 2:General Structure" (PDF). pp. 36. http://www.unicode.org/versions/Unicode5.0.0/ch02.pdf. Retrieved 2008-11-30. "Use of a BOM is neither required nor recommended for UTF-8, but may be encountered in contexts where UTF-8 data is converted from other encoding forms that use a BOM or where the BOM is used as a UTF-8 signature" 
  4. ^ a b "FAQ - UTF-8, UTF-16, UTF-32 & BOM: Can a UTF-8 data stream contain the BOM character (in UTF-8 form)? If yes, then can I still assume the remaining UTF-8 bytes are in big-endian order?". http://unicode.org/faq/utf_bom.html#bom5. Retrieved 2009-01-04. 
  5. ^ Markus Kuhn (2007). "UTF-8 and Unicode FAQ for Unix/Linux: What different encodings are there?". http://www.cl.cam.ac.uk/~mgk25/unicode.html#ucsutf. Retrieved 20 January 2009. "Adding a UTF-8 signature at the start of a file would interfere with many established conventions such as the kernel looking for “#!” at the beginning of a plaintext executable to locate the appropriate interpreter." 
  6. ^ Alf P. Steinbach (2011). "Unicode part 1: Windows console i/o approaches". http://alfps.wordpress.com/2011/11/22/unicode-part-1-windows-console-io-approaches/. Retrieved 24 March 2012. "However, since the C++ source code was encoded as UTF-8 without BOM (as is usual in Linux), the Visual C++ compiler erroneously assumed that the source code was encoded as Windows ANSI." 
  7. ^ STD 63: UTF-8, a transformation of ISO 10646 Byte Order Mark (BOM)
  8. ^ UTR #6: Signature Byte Sequence for SCSU

This page contains text from Wikipedia, the Free Encyclopedia - http://en.wikipedia.org/wiki/Byte_order_mark

This article is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, which means that you can copy and modify it as long as the entire work (including additions) remains under this license.


[Figure: a text file created with gedit, viewed in a hex editor; besides the text itself there are only EOL markers with the hexadecimal value 0A.]

In computing, a newline,[1] also known as a line break or end-of-line (EOL) marker, is a special character or sequence of characters signifying the end of a line of text. The name comes from the fact that the next character after the newline will appear on a new line—that is, on the next line below the text immediately preceding the newline. The actual codes representing a newline vary across operating systems, which can be a problem when exchanging text files between systems with different newline representations.

There is also some confusion whether newlines terminate or separate lines. If a newline is considered a separator, there will be no newline after the last line of a file. The general convention on most systems is to add a newline even after the last line, i.e. to treat newline as a line terminator. Some programs have problems processing the last line of a file if it is not newline terminated. Conversely, programs that expect newline to be used as a separator will interpret a final newline as starting a new (empty) line.
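
The difference is easy to see in Python, where str.split treats "\n" as a separator while str.splitlines treats newlines as terminators:

  "a\nb\n".split("\n")   # ['a', 'b', ''] -- separator view: trailing empty line
  "a\nb\n".splitlines()  # ['a', 'b']     -- terminator view: no empty line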

In text intended primarily to be read by humans using software which implements the word wrap feature, a newline character typically only needs to be stored if a line break is required independent of whether the next word would fit on the same line, such as between paragraphs and in vertical lists. See hard return and soft return.

Representations

Software applications and operating systems usually represent a newline with one or two control characters:

  • Systems based on ASCII or a compatible character set use either LF (Line feed, '\n', 0x0A, 10 in decimal) or CR (Carriage return, '\r', 0x0D, 13 in decimal) individually, or CR followed by LF (CR+LF, '\r\n', 0x0D 0x0A). These characters are based on printer commands: the line feed instructed the printer to advance the paper one line, and the carriage return instructed it to move the print carriage back to the beginning of the current line. Some rare systems, such as QNX before version 4, used the ASCII RS (record separator, 0x1E, 30 in decimal) character as the newline character.
  • EBCDIC systems—mainly IBM mainframe systems, including z/OS (OS/390) and i5/OS (OS/400)—use NEL (Next Line, 0x15) as the newline character. Note that EBCDIC also has control characters called CR and LF, but the numerical value of LF (0x25) differs from the one used by ASCII (0x0A). Additionally, there are some EBCDIC variants that also use NEL but assign a different numeric code to the character.
  • Operating systems for the CDC 6000 series defined a newline as two or more zero-valued six-bit characters at the end of a 60-bit word. Some configurations also defined a zero-valued character as a colon character, with the result that multiple colons could be interpreted as a newline depending on position.
  • The ZX80 and ZX81, home computers from Sinclair Research Ltd, used a specific non-ASCII character set in which the code NEWLINE (0x76, 118 decimal) served as the newline character.
  • OpenVMS uses a record-based file system, which stores text files as one record per line. In most file formats, no line terminators are actually stored, but the Record Management Services facility can transparently add a terminator to each line when it is retrieved by an application. The records themselves could contain the same line terminator characters, which could either be considered a feature or a nuisance depending on the application.
  • Fixed line length was used by some early mainframe operating systems. In such a system, an implicit end-of-line was assumed every 80 characters, for example. No newline character was stored. If a file was imported from the outside world, lines shorter than the line length had to be padded with spaces, while lines longer than the line length had to be truncated. This mimicked the use of punched cards, on which each line was stored on a separate card, usually with 80 columns on each card. Many of these systems added a carriage control character to the start of the next record; this could indicate whether the next record was a continuation of the line started by the previous record, a new line, or should overprint the previous line (similar to a CR). Often this was a normal printing character, such as '#', that thus could not be used as the first character in a line. Some early line printers interpreted these characters directly in the records sent to them.

Most textual Internet protocols (including HTTP, SMTP, FTP, IRC and many others) mandate the use of ASCII CR+LF (0x0D 0x0A) on the protocol level, but recommend that tolerant applications recognize lone LF as well. In practice, there are many applications that erroneously use the C newline character '\n' instead (see section Newline in programming languages below). This leads to problems when trying to communicate with systems adhering to a stricter interpretation of the standards; one such system is the qmail MTA that actively refuses to accept messages from systems that send bare LF instead of the required CR+LF.[2]

FTP has a feature to transform newlines between CR+LF and LF only when transferring text files. This must not be used on binary files. Usually binary files and text files are recognised by checking their filename extension.

Unicode

The Unicode standard defines a large number of characters that conforming applications should recognize as line terminators:[3]

 LF:    Line Feed, U+000A
 VT:    Vertical Tab, U+000B
 FF:    Form Feed, U+000C
 CR:    Carriage Return, U+000D
 CR+LF: CR (U+000D) followed by LF (U+000A)
 NEL:   Next Line, U+0085
 LS:    Line Separator, U+2028
 PS:    Paragraph Separator, U+2029

This may seem overly complicated compared to an approach such as converting all line terminators to a single character, for example LF. However, Unicode was designed to preserve all information when converting a text file from any existing encoding to Unicode and back. Therefore, Unicode should contain characters included in existing encodings: NEL, for instance, is included in EBCDIC (0x15) and in the C1 control codes used alongside ISO-8859-1 (0x85). The approach taken in the Unicode standard allows round-trip transformation to be information-preserving while still enabling applications to recognize all possible types of line terminators.
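
Python's str.splitlines is one example of an application-level routine that recognizes this full set (along with a few other separator control codes), as a short sketch shows:

  # splitlines() honours VT, LS, and NEL as line boundaries, whereas
  # split("\n") would treat this as a single line.
  sample = "one\u000btwo\u2028three\u0085four"
  sample.splitlines()  # ['one', 'two', 'three', 'four']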

The newline codes above 0x7F are rarely recognized or used in practice: they take multiple bytes in UTF-8, and the code for NEL has been reused as the ellipsis ('…') character in Windows-1252. For instance:

  • YAML[4] no longer recognizes them as special in order to be compatible with JSON.
  • ECMAScript[5] accepts LS and PS as line breaks, but considers U+0085 (NEL) white space, not a line break.
  • Microsoft Windows 2000 does not treat any of NEL, LS or PS as a line break in its default text editor, Notepad.
  • On Linux, the popular editor gedit treats LS and PS as newlines but does not do so for NEL.

History

ASCII was developed simultaneously by the ISO and the ASA, the predecessor organization to ANSI. During the period of 1963–1968, the ISO draft standards supported the use of either CR+LF or LF alone as a newline, while the ASA drafts supported only CR+LF.

The sequence CR+LF was in common use on many early computer systems that had adopted Teletype machines, typically a Teletype Model 33 ASR, as a console device, because this sequence was required to position those printers at the start of a new line. On these systems, text was often routinely composed to be compatible with these printers, since the concept of device drivers hiding such hardware details from the application was not yet well developed; applications had to talk directly to the Teletype machine and follow its conventions.

Most minicomputer systems from DEC used this convention. CP/M used it as well, to print on the same terminals that minicomputers used. From there MS-DOS (1981) adopted CP/M's CR+LF in order to be compatible, and this convention was inherited by Microsoft's later Windows operating system.

The separation of the two functions concealed the fact that the print head could not return from the far right to the beginning of the next line in one-character time. That is why the sequence was always sent with the CR first. In fact, it was often necessary to send extra characters (extraneous CRs or NULs, which are ignored) to give the print head time to move to the left margin. Even many early video displays required multiple character times to scroll the display.

The Multics operating system began development in 1964 and used LF alone as its newline. Multics used a device driver to translate this character to whatever sequence a printer needed (including extra padding characters), and the single byte was much more convenient for programming. The seemingly more obvious choice of CR was not used, as a plain CR provided the useful function of overprinting one line with another, and thus it was useful to not translate it. Unix followed the Multics practice, and later systems followed Unix.

In programming languages

To facilitate the creation of portable programs, programming languages provide some abstractions to deal with the different types of newline sequences used in different environments.

The C programming language provides the escape sequences '\n' (newline) and '\r' (carriage return). However, these are not required to be equivalent to the ASCII LF and CR control characters. The C standard only guarantees two things:

  1. Each of these escape sequences maps to a unique implementation-defined number that can be stored in a single char value.
  2. When writing a file in text mode, '\n' is transparently translated to the native newline sequence used by the system, which may be longer than one character. When reading in text mode, the native newline sequence is translated back to '\n'. In binary mode, no translation is performed, and the internal representation produced by '\n' is output directly.

On Unix platforms, where C originated, the native newline sequence is ASCII LF (0x0A), so '\n' was simply defined to be that value. With the internal and external representation being identical, the translation performed in text mode is a no-op, and text mode and binary mode behave the same. This has caused many programmers who developed their software on Unix systems simply to ignore the distinction completely, resulting in code that is not portable to different platforms.
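
Python's text mode performs the same kind of translation the C standard describes for C, and it can be disabled per file, which makes the behaviour easy to observe. A minimal sketch (the file name is hypothetical):

  # In text mode (the default), "\n" is translated to the platform's
  # native sequence on write; passing newline="" turns translation off,
  # which is the analogue of C's binary mode for line endings.
  with open("demo.txt", "w") as f:
      f.write("line one\nline two\n")       # translated on write
  with open("demo.txt", "r", newline="") as f:
      raw = f.read()                        # contains "\r\n" on Windows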

The C library function fgets() is best avoided in binary mode because any file not written with the UNIX newline convention will be misread. Also, in text mode, any file not written with the system's native newline sequence (such as a file created on a UNIX system, then copied to a Windows system) will be misread as well.

Another common problem is the use of '\n' when communicating using an Internet protocol that mandates the use of ASCII CR+LF for ending lines. Writing '\n' to a text mode stream works correctly on Windows systems, but produces only LF on Unix, and something completely different on more exotic systems. Using "\r\n" in binary mode is slightly better.
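
The portable fix is to write the terminator explicitly and bypass text-mode translation altogether, for example over a socket in Python (the host name here is hypothetical):

  import socket

  # Send the CR+LF the protocol requires, byte for byte, instead of
  # relying on the platform's text-mode translation of "\n".
  with socket.create_connection(("smtp.example.com", 25)) as s:
      s.sendall(b"NOOP\r\n")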

Many languages, such as C++, Perl,[6] and Haskell provide the same interpretation of '\n' as C.

Java, PHP,[7] and Python[8] provide the '\r' and '\n' escape sequences as well. In contrast to C, these are guaranteed to represent the values U+000D and U+000A, respectively.

The Java I/O libraries do not transparently translate these into platform-dependent newline sequences on input or output. Instead, they provide functions for writing a full line that automatically add the native newline sequence, and functions for reading lines that accept any of CR, LF, or CR+LF as a line terminator (see BufferedReader.readLine()). The System.getProperty() method can be used to retrieve the underlying line separator.

Example:

  // Retrieve the platform's native line separator and use it to terminate a line.
  String eol = System.getProperty("line.separator");
  String lineColor = "Color: Red" + eol;

Python permits "Universal Newline Support" when opening a file for reading, when importing modules, and when executing a file.[9]

Some languages have created special variables, constants, and subroutines to facilitate newlines during program execution.

Common problems

The different newline conventions often cause text files that have been transferred between systems of different types to be displayed incorrectly. For example, files originating on Unix or Apple Macintosh systems may appear as a single long line on some Windows programs. Conversely, when viewing a file originating from a Windows computer on a Unix system, the extra CR may be displayed as ^M at the end of each line or as a second line break.

The problem can be hard to spot if some programs handle the foreign newlines properly while others do not. For example, a compiler may fail with obscure syntax errors even though the source file looks correct when displayed on the console or in an editor. On a Unix system, the command cat -v myfile.txt will send the file to stdout (normally the terminal) and make the ^M visible, which can be useful for debugging. Modern text editors generally recognize all flavours of CR / LF newlines and allow the user to convert between the different standards. Web browsers are usually also capable of displaying text files and websites which use different types of newlines.

The File Transfer Protocol can automatically convert newlines in files being transferred between systems with different newline representations when the transfer is done in "ASCII mode". However, transferring binary files in this mode usually has disastrous results: Any occurrence of the newline byte sequence—which does not have line terminator semantics in this context, but is just part of a normal sequence of bytes—will be translated to whatever newline representation the other system uses, effectively corrupting the file. FTP clients often employ some heuristics (for example, inspection of filename extensions) to automatically select either binary or ASCII mode, but in the end it is up to the user to make sure his or her files are transferred in the correct mode. If there is any doubt as to the correct mode, binary mode should be used, as then no files will be altered by FTP, though they may display incorrectly.

Conversion utilities

Text editors are often used for converting a text file between different newline formats; most modern editors can read and write files using at least the different ASCII CR/LF conventions. The standard Windows editor Notepad is not one of them (although Wordpad and the MS-DOS Editor are).

Editors are often unsuitable for converting large files. On Windows NT/2000/XP, the following command is often used instead:

TYPE unix_file | FIND "" /V > dos_file

On many Unix systems, the dos2unix (sometimes named fromdos or d2u) and unix2dos (sometimes named todos or u2d) utilities are used to translate between ASCII CR+LF (DOS/Windows) and LF (Unix) newlines. Different versions of these commands vary slightly in their syntax. However, the tr command is available on virtually every Unix-like system and is used to perform arbitrary replacement operations on single characters. A DOS/Windows text file can be converted to Unix format by simply removing all ASCII CR characters with

tr -d '\r' < inputfile > outputfile

or, if the text has only CR newlines, by converting all CR newlines to LF with

tr '\r' '\n' < inputfile > outputfile

The same tasks are sometimes performed with awk, sed, tr, or Perl, if the platform has a Perl interpreter:

awk '{sub("$","\r\n"); printf("%s",$0);}' inputfile > outputfile  # UNIX to DOS  (adding CRs on Linux and BSD based OS that haven't GNU extensions)
awk '{gsub("\r",""); print;}' inputfile > outputfile              # DOS to UNIX  (removing CRs on Linux and BSD based OS that haven't GNU extensions)
sed -e 's/$/\r/' inputfile > outputfile              # UNIX to DOS  (adding CRs on Linux based OS that use GNU extensions)
sed -e 's/\r$//' inputfile > outputfile              # DOS  to UNIX (removing CRs on Linux based OS that use GNU extensions)
cat inputfile | tr -d "\r" > outputfile              # DOS  to UNIX (removing CRs using tr(1). Not Unicode compliant.)
perl -pe 's/\r?\n|\r/\r\n/g' inputfile > outputfile  # Convert to DOS
perl -pe 's/\r?\n|\r/\n/g'   inputfile > outputfile  # Convert to UNIX
perl -pe 's/\r?\n|\r/\r/g'   inputfile > outputfile  # Convert to old Mac

To identify what type of line breaks a text file contains, the file command can be used. Moreover, the editor Vim can conveniently convert a file to be compatible with the Windows Notepad text editor. For example:

[prompt] > file myfile.txt
myfile.txt: ASCII English text
[prompt] > vim myfile.txt
  within vim :set fileformat=dos
             :wq
[prompt] > file myfile.txt
myfile.txt: ASCII English text, with CRLF line terminators

The following grep commands echo the filename (in this case myfile.txt) to the command line if the file is of the specified style:

grep -PL '\r$' myfile.txt # show UNIX style file (LF terminated)
grep -Pl '\r$' myfile.txt # show DOS style file (CRLF terminated)

For systems whose grep lacks the -P option (such as some Debian-based systems), egrep with a literal CR character can be used instead:

egrep -L $'\r' myfile.txt # show UNIX style file (LF terminated)
egrep -l $'\r' myfile.txt # show DOS style file (CRLF terminated)

The above grep commands work under Unix systems or in Cygwin under Windows. Note that these commands make some assumptions about the kinds of files that exist on the system: specifically, they assume only Unix- and DOS-style files are present, with no Mac OS 9-style (CR-terminated) files.

This technique is often combined with find to list files recursively. For instance, the following command checks all regular files (excluding directories, symbolic links, and so on) in a directory tree starting from the current directory (.), finds all Unix-style files, and saves the results in the file unix_files.txt, overwriting it if it already exists:

find . -type f -exec grep -PL '\r$' {} \; > unix_files.txt

This example will find C files and convert them to LF style line endings:

find . -name '*.[ch]' -exec fromdos {} \;

The file command also detects the type of EOL used:

file myfile.txt
> myfile.txt: ASCII text, with CRLF line terminators

Other tools permit the user to visualise the EOL characters:

od -a myfile.txt
cat -e myfile.txt
hexdump -c myfile.txt

dos2unix, unix2dos, mac2unix, unix2mac, mac2dos, and dos2mac can also perform such conversions. The flip[10] utility is sometimes used as well.

References

  1. ^ The origin of the older computer term "CRLF" - which redirects to this Newline article - or "Carriage Return [and] Line Feed", derives from standard manual typewriter design, whereby at the end of a line of text the typist pushes a lever at the left end of the carriage to return it to position for beginning the next line. In so doing, a mechanism also rolls the typewriter's platen by one line, advancing ("feeding") the paper to the correct position.
  2. ^ cr.yp.to
  3. ^ UTR #13: Unicode Newline Guidelines
  4. ^ YAML Ain't Markup Language (YAML™) Version 1.2
  5. ^ "ECMAScript Language Specification 5th edition". ECMA International. December 2009. p. 15. http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-262.pdf. Retrieved 4 April 2010. 
  6. ^ binmode - perldoc.perl.org
  7. ^ PHP: Strings - Manual
  8. ^ Lexical analysis – Python v3.0.1 documentation
  9. ^ What's new in Python 2.3
  10. ^ ASCII text conversion between UNIX, Macintosh, MS-DOS

This page contains text from Wikipedia, the Free Encyclopedia - http://en.wikipedia.org/wiki/Newline

This article is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, which means that you can copy and modify it as long as the entire work (including additions) remains under this license.