opencsv is an easy-to-use CSV (comma-separated values) parser library for Java. It was developed because all the CSV parsers at the time didn't have commercial-friendly licenses. Java 7 is currently the minimum supported version.


opencsv supports all the basic CSV-type things you're likely to want to do:

  • Arbitrary numbers of values per line.
  • Ignoring commas in quoted elements.
  • Handling quoted entries with embedded carriage returns (i.e. entries that span multiple lines).
  • Configurable separator and quote characters (or use sensible defaults).

All of these things can be done reading and writing, using a manifest of malleable methodologies:

  • To and from an array of strings.
  • To and from annotated beans.
  • From a database.
  • Read all the entries at once, or use an Iterator-style model.

Developer documentation

Here is an overview of how to use opencsv in your project.

  • Quick start
  • Upgrading from 3.x to 4.x
  • Core concepts
    • Configuration
    • Error handling
    • Annotations
  • Reading
    • Parsing
    • Reading into an array of strings
    • Reading into beans
  • Writing
    • Writing from an array of strings
    • Writing from a list of beans
    • From a database table
  • Nuts and bolts
    • Flow of data through opencsv
    • Mapping strategies

Once you have absorbed the overview of how opencsv works, please consult the well-maintained Javadocs for further details.

Quick start

This is limited to the easiest, most powerful way of using opencsv to allow you to hit the ground running.

For reading, create a bean to harbor the information you want to read, annotate the bean fields with the opencsv annotations, then do this:

     List<MyBean> beans = new CsvToBeanBuilder(new FileReader("yourfile.csv"))
          .withType(MyBean.class).build().parse();

For writing, create a bean to harbor the information you want to write, annotate the bean fields with the opencsv annotations, then do this:

     // List<MyBean> beans comes from somewhere earlier in your code.
     Writer writer = new FileWriter("yourfile.csv");
     StatefulBeanToCsv beanToCsv = new StatefulBeanToCsvBuilder(writer).build();
     beanToCsv.write(beans);
     writer.close();

Upgrading from 3.x to 4.x

4.0 is a major release because it breaks backward compatibility. What do you get for that? Here is a list of the improvements in opencsv 4.0.

  • We have rewritten the bean code to be multi-threaded so that reading from an input directly into beans is significantly faster. Performance benefits depend largely on your data and hardware, but our non-rigorous tests indicate that reading now takes a third of the time it used to.
  • We have rewritten the bean code to be multi-threaded so that writing from a list of beans is significantly faster. Performance benefits depend largely on your data and hardware, but our non-rigorous tests indicate that writing now takes half of the time it used to.
  • There is a new iterator available for iterating through the input into beans. This iterator is consistent in every way with the behavior of the code that reads all data sets at once into a list of beans. The old iterator did not support all features, like locales and custom converters.
  • opencsv now supports internationalization for all error messages it produces. The easiest way to benefit from this is to make certain the default locale is the one you want. Otherwise, look for the withErrorLocale() and setErrorLocale() methods in various classes. Localizations are provided for American English and German. Further submissions are welcome, but with a submission you enter into a life-long contract to provide updates for any new messages for the language(s) you submit. If you break this contract, you forfeit your soul.
  • Support for national character sets was added to ResultSetHelperService (NClob, NVarchar, NChar, LongNVarchar).
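Picking up the internationalization point above, the error locale can be set through the builder. A sketch (MyBean is a stand-in for your own annotated bean class):

```java
// Request German error messages while reading into beans.
CsvToBean csvToBean = new CsvToBeanBuilder(new FileReader("yourfile.csv"))
     .withType(MyBean.class)
     .withErrorLocale(Locale.GERMAN)
     .build();
```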

Here are the things you can expect to encounter during an upgrade, and what to do about them.

  • Java 7 is now the minimum supported version. Tough noogies.
  • Everything that was deprecated has been removed.
    • BeanToCsv is no more. Please use StatefulBeanToCsv instead. The quick start guide above gives you an example.
    • @CsvBind was replaced with @CsvBindByName. It really is as simple as search and replace.
    • ConvertGermanToBooleanRequired was removed. Replace it with @CsvCustomBindByName(converter = ConvertGermanToBoolean.class, required = true).
  • In the rare case that you have written your own mapping strategy:
    • MappingStrategy now includes a method verifyLineLength(). If you derive your mapping strategy from one of ours, you're okay. Otherwise, you will have to implement it.
    • In the rare case that you used opencsv 3.10, registerBeginningOfRecordForReading() and registerEndOfRecordForReading() were removed from MappingStrategy. They were the result of thought processes worthy of nothing more accomplished than a drunken monkey. I may write that because I wrote the bad code. If you derived your mapping strategy from one of ours, you're okay. Otherwise, you'll have to remove these methods.
    • findDescriptor no longer includes "throws IntrospectionException" in its method signature. If you had it, you'll have to get rid of it. If you had it and needed it, you'll have to rewrite your code.
    • There are now requirements for thread-safety imposed on certain methods in every mapping strategy. See the Javadoc for MappingStrategy for details.
    • The method setErrorLocale() is now required. If you derive your implementation from one of ours, you're fine. If not, implement it, or make it a no-op.
    • The method setType() is now required. If you derive your implementation from one of ours, you're fine. If not, implement it, or make it a no-op.
  • MappingUtils was really meant to be for internal use, but of course we can't control that, so let it be said that:
    • the class is now named opencsvUtils, because it encompasses more than mapping, and
    • the determineMappingStrategy() method now requires a locale for error messages. Null can be used for the default locale.
  • The constructors for BeanFieldDate and BeanFieldPrimitiveType now require a locale for error messages. This is to avoid a proliferation of constructors or setters. These classes probably ought not to be used in your code directly, and probably ought to be final, but we still thought it best to inform you.
  • The interface BeanField requires the method setErrorLocale(). Assuming you derive all of your BeanField implementations from AbstractBeanField, this does not affect you.

And we have a new list of things that we have deprecated and plan to remove in 5.0, as well as what you can do about it.

  • IterableCSVToBean and IterableCSVToBeanBuilder have both been deprecated. CsvToBean itself is now iterable; use it instead.
  • All constructors except the ones with the smallest (often nullary, using defaults for all values) and largest argument lists (which often have only package access) have been deprecated. The constructors in between have grown over the years as opencsv has added features, and they've become unwieldy. We encourage all of our users to use the builders we provide instead of the constructors.
  • All variants of CsvToBean.parse() except the no-argument variant. Please use the builder we provide.
  • MappingStrategy.findDescriptor() will no longer be necessary in 5.0 because the plan is to move to reflection completely and no longer use introspection.

Core concepts

There are a couple of concepts that most users of opencsv need to understand, and that apply equally to reading and writing.

Configuration

"CSV" stands for "comma-separated values", but life would be too simple if that were always true. Often the separator is a semicolon. Sometimes the separator character is included in the data for a field itself, so quotation characters are necessary. Those quotation characters could be included in the data also, so an escape character is necessary. All of these configuration options and more are given to the parser or the CSVWriter as necessary. Naturally, it's easier for you to give them to a builder and the builder passes them on to the right class.

Say you're using a tab for your separator; you can do something like this:

     CSVReader reader = new CSVReader(new FileReader("yourfile.csv"), '\t');

or for reading with annotations:

     CsvToBean csvToBean = new CsvToBeanBuilder(new FileReader("yourfile.csv"))
          .withSeparator('\t')
          .withType(MyBean.class)
          .build();

And if your fields are quoted with single quotes rather than double quotes, you can use the three-argument constructor:

     CSVReader reader = new CSVReader(new FileReader("yourfile.csv"), '\t', '\'');

or for reading with annotations:

     CsvToBean csvToBean = new CsvToBeanBuilder(new FileReader("yourfile.csv"))
          .withSeparator('\t')
          .withQuoteChar('\'')
          .withType(MyBean.class)
          .build();

Error handling

opencsv uses structured exception handling, including checked and unchecked exceptions. The checked exceptions are typically errors in input data and do not have to impede further parsing. They could occur at any time during normal operation in a production environment. They occur during reading or writing.

The unchecked errors are typically the result of incorrect programming and should not be thrown in a production environment with well-tested code.

opencsv gives you two options for handling the checked exceptions both while reading and while writing. You may either choose to have all exceptions thrown and handle these, or you may choose to have them collected so you can inspect and deal with them after parsing. If you don't have them collected, the first error in the input file will force a cessation of parsing. The default is to throw exceptions.

To collect exceptions instead of having them thrown, use CsvToBeanBuilder.withThrowExceptions(false) for reading and StatefulBeanToCsvBuilder.withThrowExceptions(false) for writing, then inspect the results after data processing with CsvToBean.getCapturedExceptions() for reading and StatefulBeanToCsv.getCapturedExceptions() for writing.
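Collecting while reading might be sketched like this (MyBean is a stand-in for your own annotated class):

```java
// Collect checked exceptions during parsing instead of having them thrown.
CsvToBean<MyBean> csvToBean = new CsvToBeanBuilder<MyBean>(new FileReader("yourfile.csv"))
     .withType(MyBean.class)
     .withThrowExceptions(false)
     .build();
List<MyBean> beans = csvToBean.parse();
for (CsvException e : csvToBean.getCapturedExceptions()) {
    // Each captured exception knows the line of input that caused it.
    System.err.println("Line " + e.getLineNumber() + ": " + e.getMessage());
}
```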

Annotations

The most powerful mechanism opencsv has for reading and writing CSV files involves defining beans that the fields of the CSV file can be mapped to and from, and annotating the fields of these beans so opencsv can do the rest. In brief, these annotations are:

  • CsvBindByName: Maps a bean field to a field in the CSV file based on the name of the header for that field in the CSV input.
  • CsvBindByPosition: Maps a bean field to a field in the CSV file based on the numerical position of the field in the CSV input.
  • CsvCustomBindByName: The same as CsvBindByName, but you must provide your own data conversion class.
  • CsvCustomBindByPosition: The same as CsvBindByPosition, but you must provide your own data conversion class.
  • CsvDate: Must be applied to bean fields of date/time types for automatic conversion to work, and must be used in conjunction with one of the preceding four annotations.

As you can infer, there are two strategies for annotating beans, depending on your input:

  • Annotating by header name
  • Annotating by column position

It is possible to annotate bean fields both with header-based and position-based annotations. If you do, position-based annotations take precedence if the mapping strategy is automatically determined. To use the header-based annotations, you would need to instantiate and pass in a HeaderColumnNameMappingStrategy. When might this be useful? Possibly reading two different sources that provide the same data, but one includes headers and the other doesn't. Possibly to convert between headerless input and output with headers. Further use cases are left as an exercise for the reader.

Most of the more detailed documentation on using annotations is in the section on reading data. The use of annotations applies equally well to writing data, though; the annotations define a two-way mapping between bean fields and fields in a CSV file. Writing is then simply reading in reverse.

Reading

Most users of opencsv find themselves needing to read CSV files, and opencsv excels at this. But then, opencsv excels at everything. :)

Parsing

It's unlikely that you will need to concern yourself with exactly how parsing works in opencsv, but documentation wouldn't be documentation if it didn't cover all of the obscure nooks and crannies. So here we go.

Parsers in opencsv implement the interface ICSVParser. You are free to write your own, if you feel the need to. opencsv itself provides two parsers, detailed in the following sections.

Although opencsv attempts to be simple to use for most use cases, and thus tries to spare you the choice of a parser, you are still always free to instantiate whichever parser suits your needs and pass it to the builder or reader you are using.

CSVParser

The original, tried and true parser that does just about everything you need to do, and does it well. If you don't tell opencsv otherwise, it uses this parser.

The advantage of the CSVParser is that it's highly configurable and has the best chance of parsing "non-standard" CSV data. The disadvantage is that, highly configurable though it is, there turned out to be RFC4180-conformant data that it could not parse. Thus the RFC4180Parser was created.

RFC4180Parser

RFC4180 defines a standard for all of the nitty-gritty questions of just precisely how CSV files are to be formatted, delimited, and escaped. Since opencsv predates RFC4180 by a few days and every effort was made to preserve backwards compatibility, it was necessary to write a new parser for full compliance with RFC4180.

The main difference between the CSVParser and the RFC4180Parser is that the CSVParser uses an escape character to denote "unprintable" characters, while the RFC4180 spec takes all characters between the first and last quote as gospel (with the exception of the double quote, which is escaped by a double quote).
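If you want the RFC4180 behavior explicitly, you can build the parser yourself and give it to the reader. A sketch using the builders:

```java
// Build an RFC4180-compliant parser and wrap the Reader with it.
ICSVParser rfc4180Parser = new RFC4180ParserBuilder().build();
CSVReader reader = new CSVReaderBuilder(new FileReader("yourfile.csv"))
     .withCSVParser(rfc4180Parser)
     .build();
```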

Reading into an array of strings

At the most basic, you can use opencsv to parse an input and return a String[], thus:

     CSVReader reader = new CSVReader(new FileReader("yourfile.csv"));
     String[] nextLine;
     while ((nextLine = reader.readNext()) != null) {
        // nextLine[] is an array of values from the line
        System.out.println(nextLine[0] + nextLine[1] + "etc...");
     }

One step up is reading all lines of the input file at once into a List<String[]>, thus:

     CSVReader reader = new CSVReader(new FileReader("yourfile.csv"));
     List<String[]> myEntries = reader.readAll();

The last option for getting at an array of strings is to use an iterator:

     CSVIterator iterator = new CSVIterator(new CSVReader(new FileReader("yourfile.csv")));
     while (iterator.hasNext()) {
        String[] nextLine = iterator.next();
        // nextLine[] is an array of values from the line
        System.out.println(nextLine[0] + nextLine[1] + "etc...");
     }

or, since CSVReader itself is Iterable:

     CSVReader reader = new CSVReader(new FileReader("yourfile.csv"));
     for (String[] nextLine : reader) {
        // nextLine[] is an array of values from the line
        System.out.println(nextLine[0] + nextLine[1] + "etc...");
     }

Reading into beans

Arrays of strings are all well and good, but there are simpler, more modern ways of data processing. Specifically, opencsv can read a CSV file directly into a list of beans. Quite often, that's what we want anyway, to be able to pass the data around and process it as a connected dataset instead of individual fields whose position in an array must be intuited. We shall start with the easiest and most powerful method of reading data into beans, and work our way down to the cogs that offer finer control, for those who have a need for such a thing.

Performance always being one of our top concerns, reading is written to be multi-threaded, which truly speeds the library up by quite a bit. There are two performance choices left in your hands:

  1. Time vs. memory: The classic trade-off. If memory is not a problem, read using CsvToBean.parse(), which will read all beans at once and is multi-threaded. If your memory is limited, use CsvToBean.iterator() and iterate over the input. Only one bean is read at a time, making multi-threading impossible and slowing down reading, but only one object is in memory at a time (assuming you process and release the object for the garbage collector immediately).
  2. Ordered vs. unordered. opencsv preserves the order of the data given to it by default. Maintaining order when using parallel programming requires some extra effort which means extra CPU time. If order does not matter to you, use CsvToBeanBuilder.withOrderedResults(false). The performance benefit is not large, but it is measurable. The ordering or lack thereof applies to data as well as any captured exceptions.
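The memory-sparing variant of point 1 might be sketched like this (MyBean stands in for your own annotated class):

```java
// Iterate instead of calling parse(): only one bean is in memory at a time.
CsvToBean<MyBean> csvToBean = new CsvToBeanBuilder<MyBean>(new FileReader("yourfile.csv"))
     .withType(MyBean.class)
     .build();
for (MyBean bean : csvToBean) {
    // Process the bean, then let it go out of scope for the garbage collector.
}
```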

The bean work was begun by Kyle Miller and extended by Tom Squires and Andrew Jones.

Annotations

By simply defining a bean and annotating the fields, opencsv can do all of the rest. When we write "bean", that's a loose approximation of the requirements. Actually, if you use annotations, opencsv uses reflection (not introspection), so all you need is a POJO (plain old Java object) that does not have to conform to the Java Bean Specification, but is required to be public and have a public nullary constructor. If getters and setters are present and accessible, they are used. Otherwise, opencsv bypasses access control restrictions to get to member variables. This is true for reading and writing.

Besides the basic mapping strategy, there are various mechanisms for processing certain kinds of data.

Annotating by header name

CSV files should have header names for all fields in the file, and these can be used to great advantage. By annotating a bean field with the name of the header whose data should be written in the field, opencsv can do all of the matching and copying for you. This also makes you independent of the order in which the headers occur in the file. For data like this:

     firstName,lastName,visitsToWebsite
     John,Doe,96
     Jane,Doe,108
you could create the following bean:

     public class Visitors {

     @CsvBindByName
     private String firstName;

     @CsvBindByName
     private String lastName;

     @CsvBindByName
     private int visitsToWebsite;

     // Getters and setters go here.
     }

Here we simply name the fields identically to the header names. After that, reading is a simple job:

     List<Visitors> beans = new CsvToBeanBuilder(new FileReader("yourfile.csv"))
          .withType(Visitors.class).build().parse();

This will give you a list of the two beans as defined in the example input file. Note how type conversions to basic data types (wrapped and unwrapped primitives and Strings) occur automatically.

Input can get more complicated, though, and opencsv gives you the tools to deal with that. Let's start with the possibility that the header names can't be mapped to Java field names:

     First name,Last name,1 visit only

In this case, we have spaces in the names and one header with a number as the initial character. Other problems can be encountered, such as international characters in header names. Additionally, we would like to require that at least the name be mandatory. For this case, our bean doesn't look much different:

     public class Visitors {

     @CsvBindByName(column = "First Name", required = true)
     private String firstName;

     @CsvBindByName(column = "Last Name", required = true)
     private String lastName;

     @CsvBindByName(column = "1 visit only")
     private boolean onlyOneVisit;

     // Getters and setters go here.
     }

The code for reading remains unchanged.

Annotating by column position

Not every scribe of CSV files is kind enough to provide header names. This is a no-no, but we're not here to condemn the authors of poor data exports. Our goal is to provide our users with everything they could possibly need to parse CSV files, no matter how bad, as long as they're still logically coherent in some way.

To that end, we have also accounted for the possibility that there are no headers, and data must be divined from column position. We will return to our previous input file sans header names:

     John,Doe,96
     Jane,Doe,108
The bean for these data would be:

     public class Visitors {

     @CsvBindByPosition(position = 0)
     private String firstName;

     @CsvBindByPosition(position = 1)
     private String lastName;

     @CsvBindByPosition(position = 2)
     private int visitsToWebsite;

     // Getters and setters go here.
     }

Besides that, the annotations behave the same as their header name counterparts.

Locales, dates

We've considered primitives, but we haven't considered more complex yet common data types. We have also not considered locales other than the default locale. Here we shall do both at the same time. Consider this input file:

     username,valid since,annual salary
     user1,21.11.2016,100.000
     user2,15.01.2017,50.000

The dates are dd.MM.yyyy, and the salaries use a dot as the thousands delimiter. For this input we create the following bean:

     public class Employees {

     @CsvBindByName(required = true)
     private String username;

     @CsvBindByName(column = "valid since")
     @CsvDate("dd.MM.yyyy")
     private Date validSince;

     @CsvBindByName(column = "annual salary", locale = "de")
     private int salary;

     // Getters and setters go here.
     }

The date is handled with the annotation @CsvDate in addition to the mapping annotation. @CsvDate can take a format string, and incidentally handles all common date-type classes. See the Javadocs for more details. The thousands separator in the salaries is dealt with by using the German locale; Germany is one of many countries where the thousands separator is a dot.

Custom converters

Now, we know that input data can get very messy, so we have provided our users with the ability to deal with the messiest of data by allowing you to define your own custom converters. Every converter must be derived from AbstractBeanField, must be public, and must have a public nullary constructor. For reading, the convert() method must be overridden. opencsv provides two custom converters in the package com.opencsv.bean.customconverter. These can be useful converters themselves, but they also exist for instructive purposes: If you want to write your own custom converter, look at these for examples of how it's done.
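As an illustration, a hypothetical converter (not part of opencsv; the class name is ours) that turns a semicolon-separated field into an array of ints might look like this:

```java
import com.opencsv.bean.AbstractBeanField;
import com.opencsv.exceptions.CsvDataTypeMismatchException;

public class ConvertSemicolonSeparatedInts extends AbstractBeanField {
    @Override
    protected Object convert(String value) throws CsvDataTypeMismatchException {
        try {
            String[] parts = value.split(";");
            int[] numbers = new int[parts.length];
            for (int i = 0; i < parts.length; i++) {
                numbers[i] = Integer.parseInt(parts[i].trim());
            }
            return numbers;
        } catch (NumberFormatException e) {
            // Report bad input data as a checked opencsv exception.
            throw new CsvDataTypeMismatchException(value, int[].class);
        }
    }
}
```

It would then be attached to a bean field of type int[] with @CsvCustomBindByName(converter = ConvertSemicolonSeparatedInts.class).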

Let's use two as illustrations. Let's say we have the following input file:

     cluster1,node1 node2,wahr
     cluster2,node3 node4 node5,falsch

In this file we have a list of server clusters. The cluster name comes first, followed by a space-delimited list of names of servers in the cluster. The final field indicates whether the cluster is in production use or not, but the truth value uses German. Here is the appropriate bean, using the custom converters opencsv provides:

     public class Cluster {

       @CsvBindByName
       private String cluster;

       @CsvCustomBindByName(converter = ConvertSplitOnWhitespace.class)
       private String[] nodes;

       @CsvCustomBindByName(converter = ConvertGermanToBoolean.class)
       private boolean production;

       // Getters and setters go here.
     }

More than that is not necessary. If you need boolean values in other languages, take a gander at the code in ConvertGermanToBoolean; Apache BeanUtils provides a slick way of converting booleans.

The corresponding annotations for custom converters based on column position are also provided.

Reading into beans without annotations

If annotations are anathema to you, you can bypass them with carefully structured data and beans, and with somewhat more code. For example, here's how you can map to a bean based on the field positions in your CSV file:

    ColumnPositionMappingStrategy strat = new ColumnPositionMappingStrategy();
    strat.setType(YourOrderBean.class);
    String[] columns = new String[] {"name", "orderNumber", "id"}; // the fields to bind to in your bean
    strat.setColumnMapping(columns);

    CsvToBean csv = new CsvToBean();
    List list = csv.parse(strat, yourReader);

Please note, if you do not use annotations, opencsv uses introspection to access member variables, so your objects will have to be honest-to-God beans.

Skipping and filtering

With some input it can be helpful to skip the first few lines. opencsv provides for this need with CsvToBeanBuilder.withSkipLines(), which ultimately is used on the appropriate constructor for CSVReader, if you would prefer to do everything without the use of the builders. This will skip the first few lines of the raw input, not the CSV data, in case some input provides heaven knows what before the first line of CSV data, such as a legal disclaimer or copyright information.

So, for example, you can skip the first two lines by doing:

     CSVReader reader = new CSVReader(new FileReader("yourfile.csv"), '\t', '\'', 2);

or for reading with annotations:

     CsvToBean csvToBean = new CsvToBeanBuilder(new FileReader("yourfile.csv"))
          .withSkipLines(2)
          .withType(MyBean.class)
          .build();

Filtering is different in that it works on CSV records and it applies to the whole input. It can also only be used with a bean mapping strategy. To filter input beans, implement CsvToBeanFilter and pass your implementation to CsvToBeanBuilder.withFilter(), or equivalently if you're not using the builders, to the appropriate parse() method from CsvToBean or even setFilter().
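A hypothetical filter (the class name is ours) that drops records with an empty first field might look like this:

```java
import com.opencsv.bean.CsvToBeanFilter;

public class NonEmptyFirstFieldFilter implements CsvToBeanFilter {
    @Override
    public boolean allowLine(String[] line) {
        // Accept the record only if the first field has content.
        return line.length > 0 && line[0] != null && !line[0].trim().isEmpty();
    }
}
```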

Yes, filtering would be much nicer with Java 8 streams, but in order to serve a wide developer base, we won't be using Java 8 for many years to come.

Writing

Less often used, but just as comfortable as reading CSV files is writing them. And believe me, a lot of work went into making writing CSV files as comfortable as possible for you, our users.

There are three methods of writing CSV data:

  • Writing from an array of strings
  • Writing from a list of beans
  • Writing from an SQL ResultSet

Writing from an array of strings

CSVWriter follows the same semantics as the CSVReader. For example, to write a tab-separated file:

     CSVWriter writer = new CSVWriter(new FileWriter("yourfile.csv"), '\t');
     // feed in your array (or convert your data to an array)
     String[] entries = "first#second#third".split("#");
     writer.writeNext(entries);
     writer.close();

If you'd prefer to use your own quote characters, you may use the three-argument version of the constructor, which takes a quote character (or feel free to pass in CSVWriter.NO_QUOTE_CHARACTER).

You can also customize the line terminators used in the generated file (which is handy when you're exporting from your Linux web application to Windows clients). There is a constructor argument for this purpose.
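Both options go through the longer constructors; for example, a sketch writing unquoted, tab-separated output with Windows line endings:

```java
CSVWriter writer = new CSVWriter(new FileWriter("yourfile.csv"), '\t',
     CSVWriter.NO_QUOTE_CHARACTER, CSVWriter.DEFAULT_ESCAPE_CHARACTER, "\r\n");
```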

Writing from a list of beans

The easiest way to write CSV files will in most cases be StatefulBeanToCsv, which is simplest to create with StatefulBeanToCsvBuilder, and which is thus named because there used to be a BeanToCsv. Thankfully, no more.

     // List<MyBean> beans comes from somewhere earlier in your code.
     Writer writer = new FileWriter("yourfile.csv");
     StatefulBeanToCsv beanToCsv = new StatefulBeanToCsvBuilder(writer).build();
     beanToCsv.write(beans);
     writer.close();

Notice, please, we did not tell opencsv what kind of bean we are writing or what mapping strategy is to be used. opencsv determines these things automatically. Naturally, the mapping strategy can be dictated, if necessary, through StatefulBeanToCsvBuilder.withMappingStrategy(), or the constructor for StatefulBeanToCsv.

Just as in reading into beans, there is a performance trade-off while writing that is left in your hands: ordered vs. unordered data. If the order of the data written to the output and the order of any exceptions captured during processing do not matter to you, use StatefulBeanToCsvBuilder.withOrderedResults(false) to obtain slightly better performance.

From a database table

Here's a nifty little trick for those of you out there who often work directly with databases and want to write the results of a query directly to a CSV file. Sean Sullivan added a neat feature to CSVWriter so you can pass writeAll() a ResultSet from an SQL query.

     java.sql.ResultSet myResultSet = . . .
     CSVWriter writer = new CSVWriter(new FileWriter("yourfile.csv"));
     writer.writeAll(myResultSet, includeHeaders);

Nuts and bolts

Now we start to poke around under the hood of opencsv.

Flow of data through opencsv

We have tried to hide all of the classes and how they work together in opencsv by providing you with builders, since you will rarely need to know all the details of opencsv's internal workings. But for those blessed few, here is how all of the pieces fit together for reading:

  1. You must provide a Reader. This can be any Reader, but a FileReader or StringReader are the most common options.
  2. If you wish, you may provide a parser (anything implementing ICSVParser).
  3. The Reader can be wrapped in a CSVReader, which is also given the parser, if you have used your own. Otherwise, opencsv creates its own parser and even its own CSVReader. If you are reading into an array of strings, this is where the trail ends.
  4. For those reading into beans, a MappingStrategy is the next step.
  5. If you want filtering, you can create a CsvToBeanFilter.
  6. The MappingStrategy and the Reader or CSVReader and optionally the CsvToBeanFilter are passed to a CsvToBean, which uses them to parse input and populate beans.
  7. If you have any custom converters, they are called for each bean field as CsvToBean is populating the bean fields.

For writing it's a little simpler:

  1. You must provide a Writer. This can be any Writer, but a FileWriter or a StringWriter are the most common options.
  2. The Writer is wrapped in a CSVWriter. This is always done for you.
  3. Create a MappingStrategy if you need to. Otherwise opencsv will automatically determine one.
  4. Create a StatefulBeanToCsv, give it the MappingStrategy and the Writer.
  5. If you have any custom converters, they are called for each bean field as the field is written out to the CSV file.

Mapping strategies

opencsv has the concept of a mapping strategy. This is what translates a column from an input file into a field in a bean or vice versa. As we have already implied in the documentation of the annotations, there are two basic mapping strategies: Mapping by header name and mapping by column position. These are incarnated in HeaderColumnNameMappingStrategy and ColumnPositionMappingStrategy respectively. There is one more addendum to the header name mapping strategy: If you need to translate names from the input file to field names and you are not using annotations, you will need to use HeaderColumnNameTranslateMappingStrategy.

If you use annotations and CsvToBeanBuilder (for reading) or StatefulBeanToCsv(Builder) (for writing), an appropriate mapping strategy is automatically determined, and you need worry about nothing else.

Naturally, you can implement your own mapping strategies as you see fit. Your mapping strategy must implement the interface MappingStrategy, but has no other requirement. Feel free to derive a class from the existing implementations for simplicity.

If you have implemented your own mapping strategy, or if you need to override the automatic selection of a mapping strategy, for example if you are reading the same bean with one mapping strategy, but writing it with a different one for conversion purposes, you need to let opencsv know which mapping strategy it must use. For reading, this is accomplished by passing an instance of your mapping strategy to CsvToBeanBuilder.withMappingStrategy(). For writing, pass your strategy to StatefulBeanToCsvBuilder.withMappingStrategy().
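Overriding the automatic choice for reading might be sketched like this (MyBean stands in for your own annotated class):

```java
// Force header-based mapping even where position-based annotations exist.
HeaderColumnNameMappingStrategy<MyBean> strategy = new HeaderColumnNameMappingStrategy<>();
strategy.setType(MyBean.class);
List<MyBean> beans = new CsvToBeanBuilder<MyBean>(new FileReader("yourfile.csv"))
     .withMappingStrategy(strategy)
     .build()
     .parse();
```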

Frequently Asked Questions

Where can I get it?

Source and binaries are available from SourceForge

Can I use opencsv in my commercial applications?

Yes. opencsv is available under a commercial-friendly Apache 2.0 license. You are free to include it in your commercial applications without any fee or charge, and you are free to modify it to suit your circumstances. To find out more details of the license, read the Apache 2.0 license agreement.

Can I get the source? More example code?

You can view the source from the opencsv source section. The source section also gives you the URL to the git repository so you can download source code. There is also a sample addressbook CSV reader in the /examples directory. And for extra marks, there's a JUnit test suite in the /test directory.

How can I use it in my Maven projects?

Add a dependency element to your pom:

     <dependency>
        <groupId>com.opencsv</groupId>
        <artifactId>opencsv</artifactId>
        <version>4.0</version>
     </dependency>
Who maintains opencsv?

  • opencsv was developed in a couple of hours by Glen Smith. You can read his blog for more info and contact details.
  • opencsv owes much of what it currently is to Scott Conway, whose contributions are too numerous to list, and who is the current (only) maintainer of the project.
  • Sean Sullivan contributed work and was maintainer for a time.
  • Kyle Miller contributed the bean binding work.
  • Tom Squires has expanded on the bean work done by Kyle Miller to add annotations.
  • Andrew Rucker Jones expanded on the annotation work done by Tom Squires and put some extra polish on the documentation.
  • Maciek Opala contributed a lot of his time modernizing opencsv. He moved the repository to git and fixed several issues.
  • J.C. Romanda contributed several fixes.

Reporting issues

You can report issues on the support page at Sourceforge. Please post a sample file that demonstrates your issue. For bonus marks, post a patch too. :-)