# Exporting models
As well as accessing model attributes directly via their names (e.g. `model.foobar`),
models can be converted and exported in a number of ways:
## model.dict(...)
This is the primary way of converting a model to a dictionary. Sub-models will be recursively converted to dictionaries.
Arguments:

- `include`: fields to include in the returned dictionary; see below
- `exclude`: fields to exclude from the returned dictionary; see below
- `by_alias`: whether field aliases should be used as keys in the returned dictionary; default `False`
- `exclude_unset`: whether fields which were not explicitly set when creating the model should be excluded from the returned dictionary; default `False`. Prior to v1.0, `exclude_unset` was known as `skip_defaults`; use of `skip_defaults` is now deprecated
- `exclude_defaults`: whether fields which are equal to their default values (whether set or otherwise) should be excluded from the returned dictionary; default `False`
Example:
```python
from pydantic import BaseModel

class BarModel(BaseModel):
    whatever: int

class FooBarModel(BaseModel):
    banana: float
    foo: str
    bar: BarModel

m = FooBarModel(banana=3.14, foo='hello', bar={'whatever': 123})

# returns a dictionary:
print(m.dict())
"""
{
    'banana': 3.14,
    'foo': 'hello',
    'bar': {'whatever': 123},
}
"""
print(m.dict(include={'foo', 'bar'}))
#> {'foo': 'hello', 'bar': {'whatever': 123}}
print(m.dict(exclude={'foo', 'bar'}))
#> {'banana': 3.14}
```
(This script is complete, it should run "as is")
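The example above only exercises `include` and `exclude`. The following sketch (the model and field names are illustrative, not from the original docs) shows the effect of the remaining flags, and in particular the difference between `exclude_unset` and `exclude_defaults`:

```python
from pydantic import BaseModel, Field

class Model(BaseModel):
    # illustrative model, not from the pydantic docs
    first_name: str = Field('John', alias='firstName')
    age: int = 42

# `age` is set explicitly (to its default value); `first_name` is not set
m = Model(age=42)

print(m.dict(by_alias=True))
#> {'firstName': 'John', 'age': 42}
print(m.dict(exclude_unset=True))
#> {'age': 42}
print(m.dict(exclude_defaults=True))
#> {}
```

Note that `exclude_unset` keeps `age` because it was explicitly passed to the constructor, while `exclude_defaults` drops it because its value equals the default, whether it was set or not.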
## dict(model) and iteration
pydantic models can also be converted to dictionaries using `dict(model)`, and you can also
iterate over a model's fields using `for field_name, value in model:`. With this approach the
raw field values are returned, so sub-models will not be converted to dictionaries.
Example:
```python
from pydantic import BaseModel

class BarModel(BaseModel):
    whatever: int

class FooBarModel(BaseModel):
    banana: float
    foo: str
    bar: BarModel

m = FooBarModel(banana=3.14, foo='hello', bar={'whatever': 123})

print(dict(m))
"""
{
    'banana': 3.14,
    'foo': 'hello',
    'bar': BarModel(
        whatever=123,
    ),
}
"""
for name, value in m:
    print(f'{name}: {value}')
#> banana: 3.14
#> foo: hello
#> bar: whatever=123
```
(This script is complete, it should run "as is")
## model.copy(...)
`copy()` allows models to be duplicated, which is particularly useful for immutable models.
Arguments:

- `include`: fields to include in the new model; see below
- `exclude`: fields to exclude from the new model; see below
- `update`: a dictionary of values to change when creating the copied model
- `deep`: whether to make a deep copy of the new model; default `False`
Example:
```python
from pydantic import BaseModel

class BarModel(BaseModel):
    whatever: int

class FooBarModel(BaseModel):
    banana: float
    foo: str
    bar: BarModel

m = FooBarModel(banana=3.14, foo='hello', bar={'whatever': 123})

print(m.copy(include={'foo', 'bar'}))
#> foo='hello' bar=BarModel(whatever=123)
print(m.copy(exclude={'foo', 'bar'}))
#> banana=3.14
print(m.copy(update={'banana': 0}))
#> banana=0 foo='hello' bar=BarModel(whatever=123)
print(id(m.bar), id(m.copy().bar))
#> 140689091421064 140689091421064
# normal copy gives the same object reference for `bar`
print(id(m.bar), id(m.copy(deep=True).bar))
#> 140689091421064 140689091422216
# deep copy gives a new object reference for `bar`
```
(This script is complete, it should run "as is")
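Since `copy()` is described above as particularly useful for immutable models, here is a minimal sketch of that use case, assuming `Config.allow_mutation = False` (the `Point` model is made up for illustration):

```python
from pydantic import BaseModel

class Point(BaseModel):
    # illustrative model, not from the pydantic docs
    x: int
    y: int

    class Config:
        allow_mutation = False  # instances are immutable

p = Point(x=1, y=2)
# `p.x = 10` would raise a TypeError since mutation is disallowed;
# copy(update=...) is the way to obtain a modified instance instead
p2 = p.copy(update={'x': 10})
print(p2)
#> x=10 y=2
```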
## model.json(...)
The `.json()` method will serialise a model to JSON. Typically, `.json()` in turn calls `.dict()` and
serialises its result. (For models with a custom root type, after calling `.dict()`,
only the value for the `__root__` key is serialised.)
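For example, with a custom root type (a minimal sketch; the `Pets` model here is illustrative, not from the surrounding docs):

```python
from typing import List

from pydantic import BaseModel

class Pets(BaseModel):
    # illustrative custom-root model, not from the pydantic docs
    __root__: List[str]

pets = Pets(__root__=['dog', 'cat'])
print(pets.dict())
#> {'__root__': ['dog', 'cat']}
# only the value of the __root__ key is serialised:
print(pets.json())
#> ["dog", "cat"]
```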
Serialisation can be customised on a model using the `json_encoders` config property; the keys should be types, and
the values should be functions which serialise that type (see the example below).
Arguments:

- `include`: fields to include in the returned dictionary; see below
- `exclude`: fields to exclude from the returned dictionary; see below
- `by_alias`: whether field aliases should be used as keys in the returned dictionary; default `False`
- `exclude_unset`: whether fields which were not explicitly set when creating the model should be excluded from the returned dictionary; default `False`. Prior to v1.0, `exclude_unset` was known as `skip_defaults`; use of `skip_defaults` is now deprecated
- `exclude_defaults`: whether fields which are equal to their default values (whether set or otherwise) should be excluded from the returned dictionary; default `False`
- `encoder`: a custom encoder function passed to the `default` argument of `json.dumps()`; defaults to a custom encoder designed to take care of all common types
- `**dumps_kwargs`: any other keyword arguments are passed to `json.dumps()`, e.g. `indent`.
Example:
```python
from datetime import datetime, timedelta

from pydantic import BaseModel
from pydantic.json import timedelta_isoformat

class BarModel(BaseModel):
    whatever: int

class FooBarModel(BaseModel):
    foo: datetime
    bar: BarModel

m = FooBarModel(foo=datetime(2032, 6, 1, 12, 13, 14), bar={'whatever': 123})
print(m.json())
#> {"foo": "2032-06-01T12:13:14", "bar": {"whatever": 123}}
# (returns a str)

class WithCustomEncoders(BaseModel):
    dt: datetime
    diff: timedelta

    class Config:
        json_encoders = {
            datetime: lambda v: v.timestamp(),
            timedelta: timedelta_isoformat,
        }

m = WithCustomEncoders(dt=datetime(2032, 6, 1), diff=timedelta(hours=100))
print(m.json())
#> {"dt": 1969660800.0, "diff": "P4DT4H0M0.000000S"}
```
(This script is complete, it should run "as is")
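As a quick illustration of `**dumps_kwargs` (a sketch; the `Model` below is made up for this example), any extra keyword argument such as `indent` is forwarded straight through to `json.dumps()`:

```python
from pydantic import BaseModel

class Model(BaseModel):
    # illustrative model, not from the pydantic docs
    foo: str = 'bar'
    count: int = 1

# `indent` is passed through to json.dumps()
print(Model().json(indent=2))
"""
{
  "foo": "bar",
  "count": 1
}
"""
```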
By default, `timedelta` is encoded as a simple float of total seconds. The `timedelta_isoformat` function is provided
as an optional alternative which implements ISO 8601 time diff encoding.
See below for details on how to use other libraries for more performant JSON encoding and decoding.
## pickle.dumps(model)
Using the same plumbing as `copy()`, pydantic models support efficient pickling and unpickling.
```python
import pickle

from pydantic import BaseModel

class FooBarModel(BaseModel):
    a: str
    b: int

m = FooBarModel(a='hello', b=123)
print(m)
#> a='hello' b=123
data = pickle.dumps(m)
print(data)
"""
b'\x80\x03cexporting_models_pickle\nFooBarModel\nq\x00)\x81q\x01}q\x02(X\x08\
x00\x00\x00__dict__q\x03}q\x04(X\x01\x00\x00\x00aq\x05X\x05\x00\x00\x00helloq
\x06X\x01\x00\x00\x00bq\x07K{uX\x0e\x00\x00\x00__fields_set__q\x08cbuiltins\n
set\nq\t]q\n(h\x05h\x07e\x85q\x0bRq\x0cub.'
"""
m2 = pickle.loads(data)
print(m2)
#> a='hello' b=123
```
(This script is complete, it should run "as is")
## Advanced include and exclude
The `dict`, `json`, and `copy` methods support `include` and `exclude` arguments which can either be
sets or dictionaries. This allows nested selection of which fields to export:
```python
from pydantic import BaseModel, SecretStr

class User(BaseModel):
    id: int
    username: str
    password: SecretStr

class Transaction(BaseModel):
    id: str
    user: User
    value: int

t = Transaction(
    id="1234567890",
    user=User(
        id=42,
        username="JohnDoe",
        password="hashedpassword"
    ),
    value=9876543210,
)

# using a set:
print(t.dict(exclude={'user', 'value'}))
#> {'id': '1234567890'}

# using a dict:
print(t.dict(exclude={'user': {'username', 'password'}, 'value': ...}))
#> {'id': '1234567890', 'user': {'id': 42}}

print(t.dict(include={'id': ..., 'user': {'id'}}))
#> {'id': '1234567890', 'user': {'id': 42}}
```
The ellipsis (`...`) indicates that we want to exclude or include an entire key, just as if we included it in a set.
Of course, the same can be done at any depth level:
```python
import datetime
from typing import List

from pydantic import BaseModel, SecretStr

class Country(BaseModel):
    name: str
    phone_code: int

class Address(BaseModel):
    post_code: int
    country: Country

class CardDetails(BaseModel):
    number: SecretStr
    expires: datetime.date

class Hobby(BaseModel):
    name: str
    info: str

class User(BaseModel):
    first_name: str
    second_name: str
    address: Address
    card_details: CardDetails
    hobbies: List[Hobby]

user = User(
    first_name='John',
    second_name='Doe',
    address=Address(
        post_code=123456,
        country=Country(
            name='USA',
            phone_code=1
        )
    ),
    card_details=CardDetails(
        number=4212934504460000,
        expires=datetime.date(2020, 5, 1)
    ),
    hobbies=[
        Hobby(name='Programming', info='Writing code and stuff'),
        Hobby(name='Gaming', info='Hell Yeah!!!'),
    ],
)

exclude_keys = {
    'second_name': ...,
    'address': {'post_code': ..., 'country': {'phone_code'}},
    'card_details': ...,
    # You can exclude values from tuples and lists by indexes
    'hobbies': {-1: {'info'}},
}

include_keys = {
    'first_name': ...,
    'address': {'country': {'name'}},
    'hobbies': {0: ..., -1: {'name'}},
}

# would be the same as user.dict(exclude=exclude_keys) in this case:
print(user.dict(include=include_keys))
"""
{
    'first_name': 'John',
    'address': {'country': {'name': 'USA'}},
    'hobbies': [
        {
            'name': 'Programming',
            'info': 'Writing code and stuff',
        },
        {'name': 'Gaming'},
    ],
}
"""
```
The same holds for the `json` and `copy` methods.
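For example, the same nested `exclude` can be passed to `.json()` (a minimal sketch, re-declaring the `Transaction` model from the first example above so it runs on its own):

```python
from pydantic import BaseModel, SecretStr

class User(BaseModel):
    id: int
    username: str
    password: SecretStr

class Transaction(BaseModel):
    id: str
    user: User
    value: int

t = Transaction(
    id='1234567890',
    user=User(id=42, username='JohnDoe', password='hashedpassword'),
    value=9876543210,
)

# the same nested exclude dict works for JSON export
print(t.json(exclude={'user': {'username', 'password'}, 'value': ...}))
#> {"id": "1234567890", "user": {"id": 42}}
```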
## Custom JSON (de)serialisation
To improve the performance of encoding and decoding JSON, alternative JSON implementations
(e.g. ujson) can be used via the `json_loads` and `json_dumps` properties of `Config`.
```python
from datetime import datetime

import ujson

from pydantic import BaseModel

class User(BaseModel):
    id: int
    name = 'John Doe'
    signup_ts: datetime = None

    class Config:
        json_loads = ujson.loads

user = User.parse_raw('{"id": 123,"signup_ts":1234567890,"name":"John Doe"}')
print(user)
#> id=123 signup_ts=datetime.datetime(2009, 2, 13, 23, 31, 30,
#> tzinfo=datetime.timezone.utc) name='John Doe'
```
(This script is complete, it should run "as is")
`ujson` generally cannot be used to dump JSON since it doesn't support encoding of objects like datetimes and does
not accept a `default` fallback function argument. To do this, you may use another library like orjson.
```python
from datetime import datetime

import orjson

from pydantic import BaseModel

def orjson_dumps(v, *, default):
    # orjson.dumps returns bytes, to match standard json.dumps we need to decode
    return orjson.dumps(v, default=default).decode()

class User(BaseModel):
    id: int
    name = 'John Doe'
    signup_ts: datetime = None

    class Config:
        json_loads = orjson.loads
        json_dumps = orjson_dumps

user = User.parse_raw('{"id":123,"signup_ts":1234567890,"name":"John Doe"}')
print(user.json())
#> {"id":123,"signup_ts":"2009-02-13T23:31:30+00:00","name":"John Doe"}
```
(This script is complete, it should run "as is")
Note that `orjson` takes care of `datetime` encoding natively, making it faster than `json.dumps` but
meaning you cannot always customise the encoding using `Config.json_encoders`.
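A small sketch of this limitation (the `Event` model is made up for illustration): a `json_encoders` entry for `datetime` is never consulted here, because orjson serialises datetimes itself and only calls the `default` fallback for types it does not know:

```python
from datetime import datetime

import orjson

from pydantic import BaseModel

def orjson_dumps(v, *, default):
    return orjson.dumps(v, default=default).decode()

class Event(BaseModel):
    # illustrative model, not from the pydantic docs
    ts: datetime

    class Config:
        json_dumps = orjson_dumps
        # ignored for datetimes: orjson never falls back to `default` for them
        json_encoders = {datetime: lambda v: v.timestamp()}

print(Event(ts=datetime(2032, 6, 1)).json())
#> {"ts":"2032-06-01T00:00:00"}
```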