import ibis
con = ibis.connect("duckdb://local.ddb")
DuckDB
Install
Install Ibis and dependencies for the DuckDB backend:
Install with the duckdb extra:
pip install 'ibis-framework[duckdb]'
And connect:
import ibis
con = ibis.duckdb.connect()

Adjust connection parameters as needed.
Install for DuckDB:
conda install -c conda-forge ibis-duckdb
And connect:
import ibis
con = ibis.duckdb.connect()

Adjust connection parameters as needed.
Install for DuckDB:
mamba install -c conda-forge ibis-duckdb
And connect:
import ibis
con = ibis.duckdb.connect()

Adjust connection parameters as needed.
Connect
ibis.duckdb.connect
Connect to an in-memory database:
con = ibis.duckdb.connect()
Connect to, or create, a local DuckDB file:

con = ibis.duckdb.connect("mydb.duckdb")
ibis.duckdb.connect is a thin wrapper around ibis.backends.duckdb.Backend.do_connect.
Connection Parameters
do_connect
do_connect(self, database=':memory:', read_only=False, temp_directory=None, extensions=None, **config)
Create an Ibis client connected to a DuckDB database.
Parameters
Name | Type | Description | Default |
---|---|---|---|
database | str | pathlib.Path | Path to a duckdb database. | ':memory:' |
read_only | bool | Whether the database is read-only. | False |
temp_directory | str | pathlib.Path | None | Directory to use for spilling to disk. Only set by default for in-memory connections. | None |
extensions | collections.abc.Sequence[str] | None | A list of duckdb extensions to install/load upon connection. | None |
config | typing.Any | DuckDB configuration parameters. See the DuckDB configuration documentation for possible configuration values. | {} |
Examples
>>> import ibis
>>> ibis.duckdb.connect("database.ddb", threads=4, memory_limit="1GB")
<ibis.backends.duckdb.Backend object at ...>
ibis.connect
URL format
In addition to ibis.duckdb.connect, you can also connect to DuckDB by passing a properly-formatted DuckDB connection URL to ibis.connect:

Given an empty path, ibis.connect will connect to an ephemeral, in-memory database.

con = ibis.connect("duckdb://")
MotherDuck
The DuckDB backend supports MotherDuck. If you have an account, you can connect to MotherDuck by passing in the string md: or motherduck:. ibis will trigger the authentication prompt in-browser.

import ibis

con = ibis.duckdb.connect("md:")
Authentication to MotherDuck will trigger on the first call that requires retrieving information (in this case list_tables).

con.list_tables()

Attempting to automatically open the SSO authorization page in your default browser.
1. Please open this link to login into your account: https://auth.motherduck.com/activate
2. Enter the following code: ZSRQ-GJQS

Token successfully retrieved ✅

['penguins']

You can store it as an environment variable to avoid having to log in again:

$ export motherduck_token='****************'
duckdb.Backend
add_operation
add_operation(self, operation)
Add a translation function to the backend for a specific operation.
Operations are defined in ibis.expr.operations, and a translation function receives the translator object and an expression as parameters, and returns a value depending on the backend.
attach
attach(self, path, name=None, read_only=False)
Attach another DuckDB database to the current DuckDB session.
Parameters
Name | Type | Description | Default |
---|---|---|---|
path | str | pathlib.Path | Path to the database to attach. | required |
name | str | None | Name to attach the database as. Defaults to the basename of path. | None |
read_only | bool | Whether to attach the database as read-only. | False |
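For illustration, a minimal sketch of attaching a second database file read-only under an explicit alias (the file name and alias are hypothetical):

import ibis

con = ibis.duckdb.connect()
# Attach an existing DuckDB file as "analytics"; read_only=True prevents writes to it.
con.attach("analytics.duckdb", name="analytics", read_only=True)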
attach_sqlite
attach_sqlite(self, path, overwrite=False, all_varchar=False)
Attach a SQLite database to the current DuckDB session.
Parameters
Name | Type | Description | Default |
---|---|---|---|
path | str | pathlib.Path | The path to the SQLite database. | required |
overwrite | bool | Allow overwriting any tables or views that already exist in your current session with the contents of the SQLite database. | False |
all_varchar | bool | Set all SQLite columns to type VARCHAR to avoid type errors on ingestion. | False |
Examples
>>> import ibis
>>> import sqlite3
>>> with sqlite3.connect("/tmp/attach_sqlite.db") as con:
"DROP TABLE IF EXISTS t")
... con.execute("CREATE TABLE t (a INT, b TEXT)")
... con.execute(
... con.execute("INSERT INTO t VALUES (1, 'a'), (2, 'b'), (3, 'c')"
...
... )<...>
>>> con = ibis.connect("duckdb://")
>>> con.list_tables()
[]>>> con.attach_sqlite("/tmp/attach_sqlite.db")
>>> con.list_tables()
't'] [
begin
begin(self)
compile
compile(self, expr, limit=None, params=None, timecontext=None)
Compile an Ibis expression.
Parameters
Name | Type | Description | Default |
---|---|---|---|
expr | ibis.Expr | Ibis expression | required |
limit | str | None | For expressions yielding result sets; retrieve at most this number of values/rows. Overrides any limit already set on the expression. | None |
params | collections.abc.Mapping[ibis.Expr, typing.Any] | None | Named unbound parameters | None |
timecontext | tuple[pandas.Timestamp, pandas.Timestamp] | None | Additional information about data source time boundaries | None |
Returns
Type | Description |
---|---|
typing.Any | The output of compilation. The type of this value depends on the backend. |
connect
connect(self, *args, **kwargs)
Connect to the database.
Parameters
Name | Type | Description | Default |
---|---|---|---|
*args | | Mandatory connection parameters, see the docstring of do_connect for details. | () |
**kwargs | | Extra connection parameters, see the docstring of do_connect for details. | {} |
Notes
This creates a new backend instance with saved args and kwargs, then calls reconnect and finally returns the newly created and connected backend instance.
Returns
Type | Description |
---|---|
ibis.backends.base.BaseBackend | An instance of the backend |
create_schema
create_schema(self, name, database=None, force=False)
Create a schema named name in database.
Parameters
Name | Type | Description | Default |
---|---|---|---|
name | str | Name of the schema to create. | required |
database | str | None | Name of the database in which to create the schema. If None, the current database is used. | None |
force | bool | If False, an exception is raised if the schema exists. | False |
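As a quick illustration, a minimal sketch of creating a schema idempotently (the schema name is hypothetical):

import ibis

con = ibis.duckdb.connect()
# force=True suppresses the error raised when "staging" already exists.
con.create_schema("staging", force=True)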
create_table
create_table(self, name, obj=None, *, schema=None, database=None, temp=False, overwrite=False)
Create a table.
Parameters
Name | Type | Description | Default |
---|---|---|---|
name | str | Name of the new table. | required |
obj | pandas.DataFrame | pyarrow.Table | ibis.Table | None | An Ibis table expression or pandas table that will be used to extract the schema and the data of the new table. If not provided, schema must be given. | None |
schema | ibis.Schema | None | The schema for the new table. Only one of schema or obj can be provided. | None |
database | str | None | Name of the database where the table will be created, if not the default. | None |
temp | bool | Should the table be temporary for the session. | False |
overwrite | bool | Clobber existing data | False |
Returns
Type | Description |
---|---|
Table | The table that was created. |
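A brief sketch of both calling conventions described above, using a hypothetical table name and data:

import ibis
import pandas as pd

con = ibis.duckdb.connect()
# Create an empty table from an explicit schema.
t = con.create_table("events", schema=ibis.schema(dict(id="int64", name="string")))
# Or create a table from in-memory data, replacing any existing table of that name.
df = pd.DataFrame({"id": [1, 2], "name": ["a", "b"]})
t = con.create_table("events", df, overwrite=True)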
create_view
create_view(self, name, obj, *, database=None, overwrite=False)
database
database(self, name=None)
Return a Database object for the name database.
Parameters
Name | Type | Description | Default |
---|---|---|---|
name | str | None | Name of the database to return the object for. | None |
Returns
Type | Description |
---|---|
ibis.backends.base.Database | A database object for the specified database. |
detach
detach(self, name)
Detach a database from the current DuckDB session.
Parameters
Name | Type | Description | Default |
---|---|---|---|
name | str | The name of the database to detach. | required |
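For example, a minimal sketch pairing detach with attach (the file name and alias are hypothetical):

import ibis

con = ibis.duckdb.connect()
con.attach("analytics.duckdb", name="analytics")
# ... work with the attached database, then release it.
con.detach("analytics")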
drop_schema
drop_schema(self, name, database=None, force=False)
Drop the schema with name in database.
Parameters
Name | Type | Description | Default |
---|---|---|---|
name | str | Name of the schema to drop. | required |
database | str | None | Name of the database to drop the schema from. If None, the current database is used. | None |
force | bool | If False, an exception is raised if the schema does not exist. | False |
drop_table
drop_table(self, name, database=None, force=False)
drop_view
drop_view(self, name, *, database=None, force=False)
execute
execute(self, expr, params=None, limit='default', **kwargs)
Compile and execute an Ibis expression.
Compile and execute an Ibis expression using this backend client interface, returning results in-memory in the appropriate object type.
Parameters
Name | Type | Description | Default |
---|---|---|---|
expr | ibis.Expr | Ibis expression | required |
limit | str | For expressions yielding result sets; retrieve at most this number of values/rows. Overrides any limit already set on the expression. | 'default' |
params | collections.abc.Mapping[ibis.Scalar, typing.Any] | None | Named unbound parameters | None |
kwargs | typing.Any | Backend specific arguments. For example, the clickhouse backend uses this to receive external_tables as a dictionary of pandas DataFrames. | {} |
Returns
Type | Description |
---|---|
DataFrame | Series | Scalar | * Table : pandas.DataFrame * Column : pandas.Series * Scalar : Python scalar value |
fetch_from_cursor
fetch_from_cursor(self, cursor, schema)
has_operation
has_operation(cls, operation)
insert
insert(self, table_name, obj, database=None, overwrite=False)
Insert data into a table.
Parameters
Name | Type | Description | Default |
---|---|---|---|
table_name | str | The name of the table into which the data will be inserted | required |
obj | pandas.DataFrame | ibis.Table | list | dict | The source data or expression to insert | required |
database | str | None | Name of the attached database that the table is located in. | None |
overwrite | bool | If True then replace existing contents of table | False |
Raises
Type | Description |
---|---|
NotImplementedError | If inserting data from a different database |
ValueError | If the type of obj isn’t supported |
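A short sketch, assuming a hypothetical table and pandas data:

import ibis
import pandas as pd

con = ibis.duckdb.connect()
con.create_table("events", schema=ibis.schema(dict(id="int64", name="string")))
# Append rows from a DataFrame; overwrite=True would replace the table's contents instead.
con.insert("events", pd.DataFrame({"id": [1], "name": ["a"]}))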
list_databases
list_databases(self, like=None)
list_schemas
list_schemas(self, like=None, database=None)
List existing schemas in the current connection.
Parameters
Name | Type | Description | Default |
---|---|---|---|
like | str | None | A pattern in Python’s regex format to filter returned schema names. | None |
database | str | None | The database to list schemas from. If None, the current database is searched. | None |
Returns
Type | Description |
---|---|
list[str] | The schema names that exist in the current connection, that match the like pattern if provided. |
list_tables
list_tables(self, like=None, database=None, schema=None)
List tables and views.
Parameters
Name | Type | Description | Default |
---|---|---|---|
like | str | None | Regex to filter by table/view name. | None |
database | str | None | Database name. If not passed, uses the current database. Only supported with MotherDuck. | None |
schema | str | None | Schema name. If not passed, uses the current schema. | None |
Returns
Type | Description |
---|---|
list[str] | List of table and view names. |
Examples
>>> import ibis
>>> con = ibis.duckdb.connect()
>>> foo = con.create_table("foo", schema=ibis.schema(dict(a="int")))
>>> con.list_tables()
['foo']
>>> bar = con.create_view("bar", foo)
>>> con.list_tables()
['bar', 'foo']
>>> con.create_schema("my_schema")
>>> con.list_tables(schema="my_schema")
[]
>>> with con.begin() as c:
...     c.exec_driver_sql("CREATE TABLE my_schema.baz (a INTEGER)")
<...>
>>> con.list_tables(schema="my_schema")
['baz']
load_extension
load_extension(self, extension, force_install=False)
Install and load a duckdb extension by name or path.
Parameters
Name | Type | Description | Default |
---|---|---|---|
extension | str | The extension name or path. | required |
force_install | bool | Force reinstallation of the extension. | False |
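For example, a minimal sketch loading DuckDB's httpfs extension, which enables reading data over HTTP(S) and from object stores:

import ibis

con = ibis.duckdb.connect()
# Installs the extension if needed, then loads it into the current session.
con.load_extension("httpfs")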
raw_sql
raw_sql(self, query)
Execute a query and return the cursor used for execution.
Consider using .sql instead: if your query is a SELECT statement, you can use the backend .sql method to avoid having to manually release the cursor returned from this method.

You do not need to call .close() on the cursor when running DDL or DML statements like CREATE, INSERT or DROP, only when using SELECT statements.

To release a cursor, call the close method on the returned cursor object:

cursor = con.raw_sql("SELECT ...")
cursor.close()
Or you can use a context manager:
with con.raw_sql("SELECT ...") as cursor:
...
Parameters
Name | Type | Description | Default |
---|---|---|---|
query | str | sqlalchemy.sql.ClauseElement | SQL query or SQLAlchemy expression to execute | required |
Examples
>>> con = ibis.connect("duckdb://")
>>> with con.raw_sql("SELECT 1") as cursor:
...     result = cursor.fetchall()
>>> result
[(1,)]
>>> cursor.closed
True
read_csv
read_csv(self, source_list, table_name=None, **kwargs)
Register a CSV file as a table in the current database.
Parameters
Name | Type | Description | Default |
---|---|---|---|
source_list | str | list[str] | tuple[str] | The data source(s). May be a path to a file or directory of CSV files, or an iterable of CSV files. | required |
table_name | str | None | An optional name to use for the created table. This defaults to a sequentially generated name. | None |
**kwargs | typing.Any | Additional keyword arguments passed to DuckDB loading function. See https://duckdb.org/docs/data/csv for more information. | {} |
Returns
Type | Description |
---|---|
ibis.Table | The just-registered table |
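A hedged sketch, with a hypothetical file path and table name; the header flag is forwarded to DuckDB's CSV reader:

import ibis

con = ibis.duckdb.connect()
# Extra keyword arguments are passed through to DuckDB's CSV loading function.
t = con.read_csv("data/events.csv", table_name="events", header=True)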
read_delta
read_delta(self, source_table, table_name=None, **kwargs)
Register a Delta Lake table as a table in the current database.
Parameters
Name | Type | Description | Default |
---|---|---|---|
source_table | str | The data source. Must be a directory containing a Delta Lake table. | required |
table_name | str | None | An optional name to use for the created table. This defaults to a sequentially generated name. | None |
**kwargs | typing.Any | Additional keyword arguments passed to deltalake.DeltaTable. | {} |
Returns
Type | Description |
---|---|
ibis.Table | The just-registered table. |
read_geo
read_geo(self, source, table_name=None, **kwargs)
Register a GEO file as a table in the current database.
Parameters
Name | Type | Description | Default |
---|---|---|---|
source | str | The data source(s). Path to a geospatial data file supported by duckdb. See https://duckdb.org/docs/extensions/spatial.html#st_read—read-spatial-data-from-files | required |
table_name | str | None | An optional name to use for the created table. This defaults to a sequentially generated name. | None |
**kwargs | typing.Any | Additional keyword arguments passed to DuckDB loading function. See https://duckdb.org/docs/extensions/spatial.html#st_read—read-spatial-data-from-files for more information. | {} |
Returns
Type | Description |
---|---|
ibis.Table | The just-registered table |
read_in_memory
read_in_memory(self, source, table_name=None)
Register a Pandas DataFrame or pyarrow object as a table in the current database.
Parameters
Name | Type | Description | Default |
---|---|---|---|
source | pandas.DataFrame | pyarrow.Table | pyarrow.RecordBatchReader | The data source. | required |
table_name | str | None | An optional name to use for the created table. This defaults to a sequentially generated name. | None |
Returns
Type | Description |
---|---|
ibis.Table | The just-registered table |
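A minimal sketch registering a pandas DataFrame under a hypothetical name:

import ibis
import pandas as pd

con = ibis.duckdb.connect()
df = pd.DataFrame({"a": [1, 2, 3]})
# The returned expression can be queried like any other table.
t = con.read_in_memory(df, table_name="df")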
read_json
read_json(self, source_list, table_name=None, **kwargs)
Read newline-delimited JSON into an ibis table.
Parameters
Name | Type | Description | Default |
---|---|---|---|
source_list | str | list[str] | tuple[str] | File or list of files | required |
table_name | str | None | Optional table name | None |
**kwargs | | Additional keyword arguments passed to DuckDB’s read_json_auto function | {} |
Returns
Type | Description |
---|---|
Table | An ibis table expression |
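A brief sketch with a hypothetical newline-delimited JSON file:

import ibis

con = ibis.duckdb.connect()
# Each line of the source file is a single JSON object.
t = con.read_json("data/events.ndjson", table_name="events")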
read_parquet
read_parquet(self, source_list, table_name=None, **kwargs)
Register a parquet file as a table in the current database.
Parameters
Name | Type | Description | Default |
---|---|---|---|
source_list | str | collections.abc.Iterable[str] | The data source(s). May be a path to a file, an iterable of files, or directory of parquet files. | required |
table_name | str | None | An optional name to use for the created table. This defaults to a sequentially generated name. | None |
**kwargs | typing.Any | Additional keyword arguments passed to DuckDB loading function. See https://duckdb.org/docs/data/parquet for more information. | {} |
Returns
Type | Description |
---|---|
ibis.Table | The just-registered table |
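A minimal sketch using a hypothetical glob of parquet files:

import ibis

con = ibis.duckdb.connect()
# All files matching the glob are registered as a single table.
t = con.read_parquet("data/*.parquet", table_name="events")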
read_postgres
read_postgres(self, uri, table_name=None, schema='public')
Register a table from a postgres instance into a DuckDB table.
Parameters
Name | Type | Description | Default |
---|---|---|---|
uri | str | A postgres URI of the form postgres://user:password@host:port | required |
table_name | str | None | The table to read | None |
schema | str | PostgreSQL schema where table_name resides | 'public' |
Returns
Type | Description |
---|---|
ibis.Table | The just-registered table. |
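A hedged sketch; the URI, credentials, and table name below are placeholders:

import ibis

con = ibis.duckdb.connect()
# Reads the named table from the Postgres instance into a DuckDB table.
t = con.read_postgres(
    "postgres://user:password@localhost:5432",
    table_name="events",
    schema="public",
)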
read_sqlite
read_sqlite(self, path, table_name=None)
Register a table from a SQLite database into a DuckDB table.
Parameters
Name | Type | Description | Default |
---|---|---|---|
path | str | pathlib.Path | The path to the SQLite database | required |
table_name | str | None | The table to read | None |
Returns
Type | Description |
---|---|
ibis.Table | The just-registered table. |
Examples
>>> import ibis
>>> import sqlite3
>>> ibis.options.interactive = True
>>> with sqlite3.connect("/tmp/sqlite.db") as con:
"DROP TABLE IF EXISTS t")
... con.execute("CREATE TABLE t (a INT, b TEXT)")
... con.execute(
... con.execute("INSERT INTO t VALUES (1, 'a'), (2, 'b'), (3, 'c')"
...
... )<...>
>>> con = ibis.connect("duckdb://")
>>> t = con.read_sqlite("/tmp/sqlite.db", table_name="t")
>>> t
┏━━━━━━━┳━━━━━━━━┓
┃ a ┃ b ┃
┡━━━━━━━╇━━━━━━━━┩
│ int64 │ string │
├───────┼────────┤1 │ a │
│ 2 │ b │
│ 3 │ c │
│ └───────┴────────┘
reconnect
reconnect(self)
Reconnect to the database already configured with connect.
register
register(self, source, table_name=None, **kwargs)
Register a data source as a table in the current database.
Parameters
Name | Type | Description | Default |
---|---|---|---|
source | str | pathlib.Path | typing.Any | The data source(s). May be a path to a file or directory of parquet/csv files, an iterable of parquet or CSV files, a pandas dataframe, a pyarrow table or dataset, or a postgres URI. | required |
table_name | str | None | An optional name to use for the created table. This defaults to a sequentially generated name. | None |
**kwargs | typing.Any | Additional keyword arguments passed to DuckDB loading functions for CSV or parquet. See https://duckdb.org/docs/data/csv and https://duckdb.org/docs/data/parquet for more information. | {} |
Returns
Type | Description |
---|---|
ibis.Table | The just-registered table |
register_filesystem
register_filesystem(self, filesystem)
Register an fsspec filesystem object with DuckDB.

This allows a user to read from any fsspec-compatible filesystem using read_csv, read_parquet, read_json, etc.

Creating an fsspec filesystem requires that the corresponding backend-specific fsspec helper library is installed, e.g. to connect to Google Cloud Storage, gcsfs must be installed.
Parameters
Name | Type | Description | Default |
---|---|---|---|
filesystem | fsspec.AbstractFileSystem | The fsspec filesystem object to register with DuckDB. See https://duckdb.org/docs/guides/python/filesystems for details. | required |
Examples
>>> import ibis
>>> import fsspec
>>> gcs = fsspec.filesystem("gcs")
>>> con = ibis.duckdb.connect()
>>> con.register_filesystem(gcs)
>>> t = con.read_csv(
...     "gcs://ibis-examples/data/band_members.csv.gz",
...     table_name="band_members",
... )
>>> t
DatabaseTable: band_members
  name string
  band string
register_options
register_options(cls)
Register custom backend options.
rename_table
rename_table(self, old_name, new_name)
Rename an existing table.
Parameters
Name | Type | Description | Default |
---|---|---|---|
old_name | str | The old name of the table. | required |
new_name | str | The new name of the table. | required |
schema
schema(self, name)
Get an ibis schema from the current database for the table name.
Parameters
Name | Type | Description | Default |
---|---|---|---|
name | str | Table name | required |
Returns
Type | Description |
---|---|
Schema | The ibis schema of name |
sql
sql(self, query, schema=None, dialect=None)
Convert a SQL query to an Ibis table expression.
Parameters
Name | Type | Description | Default |
---|---|---|---|
query | str | SQL string | required |
schema | ibis.Schema | None | The expected schema for this query. If not provided, will be inferred automatically if possible. | None |
dialect | str | None | Optional string indicating the dialect of query. The default value of None will use the backend’s native dialect. | None |
Returns
Type | Description |
---|---|
Table | Table expression |
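A minimal sketch; the query is hypothetical:

import ibis

con = ibis.duckdb.connect()
# The result is an ordinary Ibis table expression and composes with the rest of the API.
expr = con.sql("SELECT 1 AS x")
expr.execute()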
table
table(self, name, database=None, schema=None)
Create a table expression from a table in the database.
Parameters
Name | Type | Description | Default |
---|---|---|---|
name | str | Table name | required |
database | str | None | The database the table resides in | None |
schema | str | None | The schema inside database where the table resides. Note: schema refers to the database hierarchy here; it does not refer to the column names and types of table. | None |
Returns
Type | Description |
---|---|
Table | Table expression |
to_csv
to_csv(self, expr, path, *, params=None, header=True, **kwargs)
Write the results of executing the given expression to a CSV file.
This method is eager and will execute the associated expression immediately.
Parameters
Name | Type | Description | Default |
---|---|---|---|
expr | ibis.Table | The ibis expression to execute and persist to CSV. | required |
path | str | pathlib.Path | The data source. A string or Path to the CSV file. | required |
params | collections.abc.Mapping[ibis.Scalar, typing.Any] | None | Mapping of scalar parameter expressions to value. | None |
header | bool | Whether to write the column names as the first line of the CSV file. | True |
**kwargs | typing.Any | DuckDB CSV writer arguments. https://duckdb.org/docs/data/csv.html#parameters | {} |
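A short sketch, assuming a hypothetical source table and output path:

import ibis

con = ibis.duckdb.connect()
t = con.read_parquet("data/events.parquet")  # hypothetical source
# Extra keyword arguments are forwarded to DuckDB's CSV writer.
con.to_csv(t, "events.csv", header=True)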
to_delta
to_delta(self, expr, path, *, params=None, **kwargs)
Write the results of executing the given expression to a Delta Lake table.
This method is eager and will execute the associated expression immediately.
Parameters
Name | Type | Description | Default |
---|---|---|---|
expr | ibis.Table | The ibis expression to execute and persist to Delta Lake table. | required |
path | str | pathlib.Path | The data source. A string or Path to the Delta Lake table. | required |
params | collections.abc.Mapping[ibis.Scalar, typing.Any] | None | Mapping of scalar parameter expressions to value. | None |
kwargs | typing.Any | Additional keyword arguments passed to deltalake.writer.write_deltalake method | {} |
to_pandas
to_pandas(self, expr, *, params=None, limit=None, **kwargs)
Execute an Ibis expression and return a pandas DataFrame, Series, or scalar.

This method is a wrapper around execute.
Parameters
Name | Type | Description | Default |
---|---|---|---|
expr | ibis.Expr | Ibis expression to execute. | required |
params | collections.abc.Mapping[ibis.Scalar, typing.Any] | None | Mapping of scalar parameter expressions to value. | None |
limit | int | str | None | An integer to effect a specific row limit. A value of None means “no limit”. The default is in ibis/config.py. | None |
kwargs | typing.Any | Keyword arguments | {} |
to_pandas_batches
to_pandas_batches(self, expr, *, params=None, limit=None, chunk_size=1000000, **kwargs)
Execute an Ibis expression and return an iterator of pandas DataFrames.
Parameters
Name | Type | Description | Default |
---|---|---|---|
expr | ibis.Expr | Ibis expression to execute. | required |
params | collections.abc.Mapping[ibis.Scalar, typing.Any] | None | Mapping of scalar parameter expressions to value. | None |
limit | int | str | None | An integer to effect a specific row limit. A value of None means “no limit”. The default is in ibis/config.py. | None |
chunk_size | int | Maximum number of rows in each returned DataFrame batch. This may have no effect depending on the backend. | 1000000 |
kwargs | typing.Any | Keyword arguments | {} |
Returns
Type | Description |
---|---|
collections.abc.Iterator[pandas.DataFrame] | An iterator of pandas DataFrames. |
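A minimal sketch, assuming a hypothetical source table, that processes results a chunk at a time rather than materializing everything at once:

import ibis

con = ibis.duckdb.connect()
t = con.read_parquet("data/events.parquet")  # hypothetical source
for df in con.to_pandas_batches(t, chunk_size=100_000):
    # Each df is a pandas DataFrame with at most chunk_size rows.
    print(len(df))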
to_parquet
to_parquet(self, expr, path, *, params=None, **kwargs)
Write the results of executing the given expression to a parquet file.
This method is eager and will execute the associated expression immediately.
Parameters
Name | Type | Description | Default |
---|---|---|---|
expr | ibis.Table | The ibis expression to execute and persist to parquet. | required |
path | str | pathlib.Path | The data source. A string or Path to the parquet file. | required |
params | collections.abc.Mapping[ibis.Scalar, typing.Any] | None | Mapping of scalar parameter expressions to value. | None |
**kwargs | typing.Any | DuckDB Parquet writer arguments. See https://duckdb.org/docs/data/parquet#writing-to-parquet-files for details | {} |
Examples
Write out an expression to a single parquet file.
>>> import ibis
>>> penguins = ibis.examples.penguins.fetch()
>>> con = ibis.get_backend(penguins)
>>> con.to_parquet(penguins, "/tmp/penguins.parquet")
Write out an expression to a hive-partitioned parquet file.
>>> import tempfile
>>> penguins = ibis.examples.penguins.fetch()
>>> con = ibis.get_backend(penguins)
Partition on a single column.
>>> con.to_parquet(penguins, tempfile.mkdtemp(), partition_by="year")
Partition on multiple columns.
>>> con.to_parquet(penguins, tempfile.mkdtemp(), partition_by=("year", "island"))
to_pyarrow
to_pyarrow(self, expr, *, params=None, limit=None, **_)
Execute expression and return results as a pyarrow table.
This method is eager and will execute the associated expression immediately.
Parameters
Name | Type | Description | Default |
---|---|---|---|
expr | ibis.Expr | Ibis expression to export to pyarrow | required |
params | collections.abc.Mapping[ibis.Scalar, typing.Any] | None | Mapping of scalar parameter expressions to value. | None |
limit | int | str | None | An integer to effect a specific row limit. A value of None means “no limit”. The default is in ibis/config.py. | None |
kwargs | typing.Any | Keyword arguments | {} |
Returns
Type | Description |
---|---|
Table | A pyarrow table holding the results of the executed expression. |
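A brief sketch with a hypothetical source table:

import ibis

con = ibis.duckdb.connect()
t = con.read_parquet("data/events.parquet")  # hypothetical source
# Materialize the first ten rows of the expression as a pyarrow.Table.
tbl = con.to_pyarrow(t.limit(10))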
to_pyarrow_batches
to_pyarrow_batches(self, expr, *, params=None, limit=None, chunk_size=1000000, **_)
Return a stream of record batches.
The returned RecordBatchReader contains a cursor with an unbounded lifetime. For analytics use cases this is usually nothing to fret about. In some cases you may need to explicitly release the cursor.
Parameters
Name | Type | Description | Default |
---|---|---|---|
expr | ibis.Expr | Ibis expression | required |
params | collections.abc.Mapping[ibis.Scalar, typing.Any] | None | Bound parameters | None |
limit | int | str | None | Limit the result to this number of rows | None |
chunk_size | int | Note: DuckDB returns batches of 1024 rows regardless of what argument is passed. | 1000000 |
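A minimal sketch, assuming a hypothetical source table; note the warning above that DuckDB yields 1024-row batches regardless of chunk_size:

import ibis

con = ibis.duckdb.connect()
t = con.read_parquet("data/events.parquet")  # hypothetical source
reader = con.to_pyarrow_batches(t)
for batch in reader:
    # Each item is a pyarrow.RecordBatch.
    print(batch.num_rows)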
to_torch
to_torch(self, expr, *, params=None, limit=None, **kwargs)
Execute an expression and return results as a dictionary of torch tensors.
Parameters
Name | Type | Description | Default |
---|---|---|---|
expr | ibis.Expr | Ibis expression to execute. | required |
params | collections.abc.Mapping[ibis.Scalar, typing.Any] | None | Parameters to substitute into the expression. | None |
limit | int | str | None | An integer to effect a specific row limit. A value of None means no limit. | None |
kwargs | typing.Any | Keyword arguments passed into the backend’s to_torch implementation. | {} |
Returns
Type | Description |
---|---|
dict[str, torch.Tensor] | A dictionary of torch tensors, keyed by column name. |
truncate_table
truncate_table(self, name, database=None)