Timeseries Store Usage

Creating a series

Here’s a simple example:

>>> import pandas as pd
>>> from tshistory.api import timeseries
>>>
>>> tsa = timeseries('postgresql://me:password@localhost/mydb')
>>>
>>> series = pd.Series([1, 2, 3],
...                    pd.date_range(start=pd.Timestamp(2017, 1, 1),
...                                  freq='D', periods=3))
# db insertion
>>> tsa.update('my_series', series, 'babar@pythonian.fr')
...
2017-01-01    1.0
2017-01-02    2.0
2017-01-03    3.0
Freq: D, Name: my_series, dtype: float64

# note how our integers got turned into floats
# (there are no provisions to handle integer series as of today)

# retrieval
>>> tsa.get('my_series')
...
2017-01-01    1.0
2017-01-02    2.0
2017-01-03    3.0
Name: my_series, dtype: float64

Note that we generally adopt the convention of naming the time series API object tsa.

Updating a series

The update method is the fundamental operation for time series management, designed for incrementally updating series as new data arrives over time.

This is good. Now, let’s insert more:

>>> series = pd.Series([2, 7, 8, 9],
...                    pd.date_range(start=pd.Timestamp(2017, 1, 2),
...                                  freq='D', periods=4))
# db insertion
>>> tsa.update('my_series', series, 'babar@pythonian.fr')
...
2017-01-03    7.0
2017-01-04    8.0
2017-01-05    9.0
Name: my_series, dtype: float64

You get back only the new information you put in, which is why the 2 doesn't appear: it was already there from the first step.

>>> tsa.get('my_series')
...
2017-01-01    1.0
2017-01-02    2.0
2017-01-03    7.0
2017-01-04    8.0
2017-01-05    9.0
Name: my_series, dtype: float64

It is important to note that the third value was replaced and the last two values were appended. The point at 2017-01-02 carried no new information, so it was ignored.

Working with versions

The insertion_dates method is one of the three fundamental API points (along with get and update). Every update creates a new version of the series, and this method returns when each version was created:

>>> tsa.insertion_dates('my_series')
[pd.Timestamp('2018-09-26 17:10:36.988920+02:00'),
 pd.Timestamp('2018-09-26 17:12:54.508252+02:00')]

>>> # get insertions within a date range
>>> tsa.insertion_dates('my_series',
...                     from_insertion_date=pd.Timestamp('2018-09-26 17:11:00+02:00'))
[pd.Timestamp('2018-09-26 17:12:54.508252+02:00')]

These timestamps identify the versions of your series and are what you use with get to retrieve any past state.
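
For instance (a minimal sketch, reusing the two versions created above):

>>> idates = tsa.insertion_dates('my_series')
>>> # fetch the series exactly as it stood after the first update
>>> tsa.get('my_series', revision_date=idates[0])
2017-01-01    1.0
2017-01-02    2.0
2017-01-03    3.0
Name: my_series, dtype: float64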

Point and version erasure

Point erasure with NaN

You can erase specific points in a series by updating with NaN values:

>>> import numpy as np
>>>
>>> # erase the point at 2017-01-02
>>> erasure = pd.Series([np.nan], index=[pd.Timestamp('2017-01-02')])
>>> tsa.update('my_series', erasure, 'cleanup@example.com', keepnans=True)

>>> # by default, erased points are not shown
>>> tsa.get('my_series')
2017-01-01    1.0
2017-01-03    7.0
2017-01-04    8.0
2017-01-05    9.0
Name: my_series, dtype: float64

>>> # use _keep_nans=True to see erased points
>>> tsa.get('my_series', _keep_nans=True)
2017-01-01    1.0
2017-01-02    NaN
2017-01-03    7.0
2017-01-04    8.0
2017-01-05    9.0
Name: my_series, dtype: float64

Version erasure with strip

WARNING: This is a DESTRUCTIVE operation that should only be used as a LAST RESORT.

The strip method permanently removes all versions after a given insertion date:

>>> # check existing versions
>>> tsa.insertion_dates('my_series')
[pd.Timestamp('2018-09-26 17:10:36.988920+02:00'),
 pd.Timestamp('2018-09-26 17:12:54.508252+02:00'),
 pd.Timestamp('2018-09-26 17:15:00.000000+02:00')]

>>> # DANGER: permanently remove versions after 17:12
>>> tsa.strip('my_series', pd.Timestamp('2018-09-26 17:12:00+02:00'))

>>> # versions are gone forever
>>> tsa.insertion_dates('my_series')
[pd.Timestamp('2018-09-26 17:10:36.988920+02:00')]

This operation cannot be undone. Use with extreme caution.
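
If you must strip, one prudent pattern is to snapshot the doomed versions first (a sketch using the documented history filters, not a dedicated API):

>>> # keep an in-memory copy of the versions about to be destroyed
>>> cutoff = pd.Timestamp('2018-09-26 17:12:00+02:00')
>>> backup = tsa.history('my_series', from_insertion_date=cutoff)
>>> tsa.strip('my_series', cutoff)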

Retrieving history

We can access the whole history (or parts of it) in one call:

>>> history = tsa.history('my_series')
...
>>>
>>> for idate, series in history.items(): # it's a dict
...     print('insertion date:', idate)
...     print(series)
...
insertion date: 2018-09-26 17:10:36.988920+02:00
2017-01-01    1.0
2017-01-02    2.0
2017-01-03    3.0
Name: my_series, dtype: float64
insertion date: 2018-09-26 17:12:54.508252+02:00
2017-01-01    1.0
2017-01-02    2.0
2017-01-03    7.0
2017-01-04    8.0
2017-01-05    9.0
Name: my_series, dtype: float64

Note how this shows the full series state for each insertion date. Also note that the insertion dates are timezone aware.

Specific versions of a series can be retrieved individually using the get method with the revision_date parameter (using timestamps obtained from insertion_dates):

>>> tsa.get('my_series', revision_date=pd.Timestamp('2018-09-26 17:11+02:00'))
...
2017-01-01    1.0
2017-01-02    2.0
2017-01-03    3.0
Name: my_series, dtype: float64
>>>
>>> tsa.get('my_series', revision_date=pd.Timestamp('2018-09-26 17:14+02:00'))
...
2017-01-01    1.0
2017-01-02    2.0
2017-01-03    7.0
2017-01-04    8.0
2017-01-05    9.0
Name: my_series, dtype: float64

It is possible to retrieve only the differences between successive insertions:

>>> diffs = tsa.history('my_series', diffmode=True)
...
>>> for idate, series in diffs.items():
...   print('insertion date:', idate)
...   print(series)
...
insertion date: 2018-09-26 17:10:36.988920+02:00
2017-01-01    1.0
2017-01-02    2.0
2017-01-03    3.0
Name: my_series, dtype: float64
insertion date: 2018-09-26 17:12:54.508252+02:00
2017-01-03    7.0
2017-01-04    8.0
2017-01-05    9.0
Name: my_series, dtype: float64
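
Since diffmode encodes the same information as the default full-state form, any full state can be rebuilt by folding the diffs together. Here is a minimal sketch with pandas combine_first (it glosses over NaN-erased points):

>>> state = pd.Series(dtype='float64')
>>> for idate, diff in diffs.items():
...     # points from later diffs override earlier ones
...     state = diff.combine_first(state)
...
>>> state
2017-01-01    1.0
2017-01-02    2.0
2017-01-03    7.0
2017-01-04    8.0
2017-01-05    9.0
Name: my_series, dtype: float64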

Working with metadata

Series can have metadata attached to help document and organize them:

>>> tsa.update_metadata('temperature_sensor', {
...     'unit': 'celsius',
...     'location': 'building_a',
...     'sensor_type': 'PT100',
...     'frequency': 'hourly'
... })

>>> tsa.metadata('temperature_sensor')
{'unit': 'celsius', 'location': 'building_a', 'sensor_type': 'PT100', 'frequency': 'hourly'}

>>> # update metadata (merges with existing)
>>> tsa.update_metadata('temperature_sensor', {'calibrated': '2023-01-15'})
>>> tsa.metadata('temperature_sensor')
{'unit': 'celsius', 'location': 'building_a', 'sensor_type': 'PT100',
 'frequency': 'hourly', 'calibrated': '2023-01-15'}

>>> # replace all metadata
>>> tsa.replace_metadata('temperature_sensor', {'unit': 'fahrenheit', 'status': 'active'})
>>> tsa.metadata('temperature_sensor')
{'unit': 'fahrenheit', 'status': 'active'}

>>> # view metadata history
>>> tsa.old_metadata('temperature_sensor')
[(pd.Timestamp('2023-01-01 10:00:00+00:00'),
  {'unit': 'celsius', 'location': 'building_a', 'sensor_type': 'PT100', 'frequency': 'hourly'}),
 (pd.Timestamp('2023-01-02 11:00:00+00:00'),
  {'unit': 'celsius', 'location': 'building_a', 'sensor_type': 'PT100',
   'frequency': 'hourly', 'calibrated': '2023-01-15'}),
 (pd.Timestamp('2023-01-03 09:00:00+00:00'),
  {'unit': 'fahrenheit', 'status': 'active'})]

Beyond managing metadata for individual series, you can also discover what metadata keys are used across all series in your refinery instance:

>>> # list all metadata keys in use
>>> tsa.list_metadata_keys()
['calibrated', 'frequency', 'location', 'sensor_type', 'status', 'unit']
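
These keys can then drive search queries, for example with the documented by.metakey filter (a sketch; the result depends on your store):

>>> # all series carrying a 'unit' metadata key
>>> tsa.find('(by.metakey "unit")')
['temperature_sensor']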

Replacing a series entirely

In specific circumstances you may need to completely replace a series: for example, when working with forecast data where only the latest forecast matters. The replace method provides this capability:

>>> # create initial series
>>> series = pd.Series([10, 20, 30],
...                    pd.date_range(start=pd.Timestamp(2025, 1, 2),
...                                  freq='D', periods=3))
>>> tsa.update('stock_levels_forecast', series, 'operator@example.com', insertion_date=pd.Timestamp(2025,1,1))

>>> # later, replace the entire series with a new forecast
>>> new_series = pd.Series([70, 50, 60],
...                        pd.date_range(start=pd.Timestamp(2025, 1, 3),
...                                      freq='D', periods=3))
>>> tsa.replace('stock_levels_forecast', new_series, 'admin@example.com', insertion_date=pd.Timestamp(2025,1,2))

>>> tsa.get('stock_levels_forecast')
2025-01-03    70.0
2025-01-04    50.0
2025-01-05    60.0
Freq: D, Name: stock_levels_forecast, dtype: float64

The replace method completely overwrites the series with new data, removing any points not present in the new series.

Note

It’s important to note that replace preserves the complete version history. The replace operation appears as a new insertion date in the series history:

>>> tsa.insertion_dates('stock_levels_forecast')
[pd.Timestamp('2025-01-01 00:00:00+0000', tz='UTC'),  # original update
 pd.Timestamp('2025-01-02 00:00:00+0000', tz='UTC')]  # replace operation

>>> # history shows both the original and replaced versions
>>> history = tsa.history('stock_levels_forecast')
>>> for idate, series in history.items():
...     print(f'insertion date: {idate}')
...     print(series)
...
insertion date: 2025-01-01 00:00:00+00:00
2025-01-02    10.0
2025-01-03    20.0
2025-01-04    30.0
Name: stock_levels_forecast, dtype: float64
insertion date: 2025-01-02 00:00:00+00:00
2025-01-03    70.0
2025-01-04    50.0
2025-01-05    60.0
Name: stock_levels_forecast, dtype: float64

This means you can always retrieve previous states of the series before the replace operation using revision_date.
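
For instance, the pre-replace forecast remains reachable:

>>> # state as of the original update, before the replace
>>> tsa.get('stock_levels_forecast',
...         revision_date=pd.Timestamp('2025-01-01', tz='UTC'))
2025-01-02    10.0
2025-01-03    20.0
2025-01-04    30.0
Name: stock_levels_forecast, dtype: float64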

Checking series existence

To check if a series exists:

>>> tsa.exists('my_series')
True
>>> tsa.exists('non_existent')
False

Renaming a series

To rename a series:

>>> tsa.rename('old_name', 'new_name')
>>> tsa.exists('old_name')
False
>>> tsa.exists('new_name')
True

Deleting a series

To remove a series from the database:

>>> tsa.delete('my_series')
>>> tsa.get('my_series')  # returns None
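
Deletion is irreversible, so a cautious pattern (a sketch, with a hypothetical series name) guards it behind exists:

>>> # only delete if the series is actually there
>>> if tsa.exists('obsolete_series'):
...     tsa.delete('obsolete_series')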

Finding series

To find series in the database:

>>> # find series whose name contains "temperature"
>>> results = tsa.find('(by.name "temperature")', meta=True)
>>> results
['temperature_fr', 'temperature_paris']
>>> # access directly the metadata of found series
>>> results[0].meta
{'unit': 'celsius', 'location': 'fr'}

See the Search Query Language Reference documentation for comprehensive query capabilities.

The older catalog() method is still available but returns everything at once from all sources in a slightly cumbersome structure:

>>> tsa.catalog()
{'local': [('my_series', 'primary'), ('temperature_fr', 'primary'), ('temperature_paris', 'primary')],
 'remote': [('calculated_avg', 'formula')]}

The find() API is generally preferred for its flexibility.

Getting series information

To get detailed information about a series:

>>> tsa.type('my_series')
'primary'

>>> tsa.interval('my_series')
Interval('2017-01-01 00:00:00', '2017-01-05 00:00:00', closed='both')

>>> tsa.source('my_series')
'local'

>>> # get the inferred frequency: a (period, quality) tuple
>>> tsa.inferred_freq('my_series')
(Timedelta('1 days 00:00:00'), 1.0)

>>> # get various information with internal_metadata:
>>> # tzawareness, value type (float, string) and supervision_status
>>> tsa.internal_metadata('my_series')
{'left': '2017-01-01T00:00:00',
 'right': '2017-01-05T00:00:00',
 'tzaware': False,
 'tablename': 'my_series',
 'index_type': 'datetime64[ns]',
 'value_type': 'float64',
 'index_dtype': '<M8[ns]',
 'value_dtype': '<f8',
 'supervision_status': 'unsupervised'}
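
A practical use of internal_metadata is picking timezone-compatible bounds before querying, since mixing naive and aware timestamps does not work. A small sketch:

>>> imeta = tsa.internal_metadata('my_series')
>>> # tz-aware series want tz-aware bounds, naive series want naive ones
>>> tz = 'UTC' if imeta['tzaware'] else None
>>> series = tsa.get('my_series',
...                  from_value_date=pd.Timestamp('2017-01-02', tz=tz))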

Working with logs

To see the history of operations on a series:

>>> tsa.log('my_series', limit=5)
[{'date': pd.Timestamp('2018-09-26 17:10:36.988920+02:00'),
  'author': 'babar@pythonian.fr',
  'meta': {},
  'rev': 1},
 {'date': pd.Timestamp('2018-09-26 17:12:54.508252+02:00'),
  'author': 'babar@pythonian.fr',
  'meta': {},
  'rev': 2}]

Staircase operations

The staircase operations are specialized methods for forecast backtesting and time-consistent analysis. They reconstruct series as they were available at specific lead times, which is essential for evaluating forecast accuracy without look-ahead bias.

The staircase method shows what data was available at (value_date - delta). For each value date, it looks back delta time to find what was known at that historical moment:

>>> staircase = tsa.staircase('forecast_series', delta=pd.Timedelta(days=1))
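
A typical backtesting use is comparing the day-ahead view with the latest known values (a sketch; forecast_series stands for any forecast-like series):

>>> # what was known one day before each value date
>>> day_ahead = tsa.staircase('forecast_series', delta=pd.Timedelta(days=1))
>>> # latest state of the same series
>>> latest = tsa.get('forecast_series')
>>> # absolute day-ahead revision error
>>> error = (day_ahead - latest).abs()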

The block_staircase method is more sophisticated. It rebuilds a series from successive blocks of history taken at regular revision intervals. Each block corresponds to data from a specific revision date. This is useful for analyzing how forecasts evolve over time with a consistent publication schedule.

For example, with daily revisions at 10am and a 24-hour maturity offset, the method assembles blocks where each day’s values come from the revision published 24 hours before:

>>> bsc = tsa.block_staircase(
...     name='forecast_series',
...     from_value_date=pd.Timestamp('2020-01-03', tz='utc'),
...     to_value_date=pd.Timestamp('2020-01-05', tz='utc'),
...     revision_freq={'days': 1},
...     revision_time={'hour': 10},
...     revision_tz='UTC',
...     maturity_offset={'hours': 24},
...     maturity_time={'hour': 4}
... )

The result is a series where different time periods come from different revisions, allowing you to see how the forecast performed with a consistent lead time across the entire period.

Time Series Operations API

The time series API provides comprehensive methods for managing time series data:

class mainsource(uri, namespace='tsh', tshclass=<class 'tshistory.tsio.timeseries'>, othersources=None)

API façade for the main source (talks directly to the storage)

The API documentation is carried by this object. The HTTP client provides exactly the same methods.

Parameters:
  • uri (str)

  • namespace (str)

  • tshclass (type)

update(name, updatets, author, metadata=None, insertion_date=None, keepnans=False, **kw)

Update a series named by <name> with the input pandas series.

This creates a new version of the series. Only the _changes_ between the last version and the provided series are part of the new version.

A series made of the changed points is returned. If there was no change, an empty series is returned and no new version is created.

New points are added, changed points are changed, points with NaN are dropped if keepnans is False (by default) or _erased_ if True.

The author is mandatory. The metadata dictionary lets you associate arbitrary metadata with the new series revision.

It is possible to force an insertion_date, which can only be later than the previous insertion_date.

>>> import pandas as pd
>>> from tshistory.api import timeseries
>>>
>>> tsa = timeseries('postgresql://me:password@localhost/mydb')
>>>
>>> series = pd.Series([1, 2, 3],
...                    pd.date_range(start=pd.Timestamp(2017, 1, 1),
...                                  freq='D', periods=3))
# db insertion
>>> tsa.update('my_series', series, 'babar@pythonian.fr')
...
2017-01-01    1.0
2017-01-02    2.0
2017-01-03    3.0
Freq: D, Name: my_series, dtype: float64
Parameters:
  • name (str)

  • updatets (Series)

  • author (str)

  • metadata (dict | None)

  • insertion_date (datetime | None)

  • keepnans (bool | None)

Return type:

Series | None

replace(name, replacets, author, metadata=None, insertion_date=None, **kw)

Replace a series named by <name> with the input pandas series.

This creates a new version of the series. The series is completely replaced with the provided values.

The author is mandatory. The metadata dictionary lets you associate arbitrary metadata with the new series revision.

It is possible to force an insertion_date, which can only be later than the previous insertion_date.

Parameters:
  • name (str)

  • replacets (Series)

  • author (str)

  • metadata (dict | None)

  • insertion_date (datetime | None)

Return type:

Series | None

exists(name)

Checks the existence of a series with a given name.

Parameters:

name (str)

Return type:

bool

source(name)

Provide the source name of a series.

When coming from the main source, it returns ‘local’.

Parameters:

name (str)

Return type:

str | None

get(name, revision_date=None, from_value_date=None, to_value_date=None, inferred_freq=False, _keep_nans=False, **kw)

Get a series by name.

By default one gets the latest version.

By specifying revision_date one can get the closest version matching the given date.

The from_value_date and to_value_date parameters allow specifying a narrower date range (by default all points are provided).

If the series does not exist, None is returned.

>>> tsa.get('my_series')
...
2017-01-01    1.0
2017-01-02    2.0
2017-01-03    3.0
Name: my_series, dtype: float64
Parameters:
  • name (str)

  • revision_date (datetime | None)

  • from_value_date (datetime | None)

  • to_value_date (datetime | None)

  • inferred_freq (bool)

  • _keep_nans (bool)

Return type:

Series | None

insertion_dates(name, from_insertion_date=None, to_insertion_date=None, from_value_date=None, to_value_date=None, **kw)

Get the list of all insertion dates (as pandas timestamps).

Parameters:
  • name (str)

  • from_insertion_date (datetime | None)

  • to_insertion_date (datetime | None)

  • from_value_date (datetime | None)

  • to_value_date (datetime | None)

history(name, from_insertion_date=None, to_insertion_date=None, from_value_date=None, to_value_date=None, diffmode=False, _keep_nans=False, **kw)

Get all versions of a series in the form of a dict from insertion dates to series versions.

It is possible to restrict the versions range by specifying from_insertion_date and to_insertion_date.

It is possible to restrict the values range by specifying from_value_date and to_value_date.

If diffmode is set to True, we don't get the full series values between two consecutive insertion dates but only the difference series (with new points, updated points and deleted points). This is typically more costly to compute but can be much more compact, and it encodes the same information as with diffmode set to False.


>>> history = tsa.history('my_series')
...
>>>
>>> for idate, series in history.items(): # it's a dict
...     print('insertion date:', idate)
...     print(series)
...
insertion date: 2018-09-26 17:10:36.988920+02:00
2017-01-01    1.0
2017-01-02    2.0
2017-01-03    3.0
Name: my_series, dtype: float64
insertion date: 2018-09-26 17:12:54.508252+02:00
2017-01-01    1.0
2017-01-02    2.0
2017-01-03    7.0
2017-01-04    8.0
2017-01-05    9.0
Name: my_series, dtype: float64
Parameters:
  • name (str)

  • from_insertion_date (datetime | None)

  • to_insertion_date (datetime | None)

  • from_value_date (datetime | None)

  • to_value_date (datetime | None)

  • diffmode (bool)

  • _keep_nans (bool)

Return type:

Dict[datetime, Series] | None

staircase(name, delta, from_value_date=None, to_value_date=None)

Compute a series whose value dates are the most recent constrained to be delta time after the insertion dates of the series.

This kind of query typically makes sense for forecast series where the relationship between insertion date and value date is sound.

Parameters:
  • name (str)

  • delta (timedelta)

  • from_value_date (datetime | None)

  • to_value_date (datetime | None)

Return type:

Series | None
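
For intuition only, here is a naive re-implementation of the staircase semantics in terms of history; this sketch assumes a timezone-aware series and is not how the method is actually computed:

>>> def naive_staircase(tsa, name, delta):
...     # walk the versions in chronological order; from each version,
...     # keep only the value dates at least `delta` after its insertion
...     # date, letting later versions override earlier ones
...     out = pd.Series(dtype='float64')
...     for idate, version in tsa.history(name).items():
...         mature = version[version.index >= idate + delta]
...         out = mature.combine_first(out)
...     return out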

block_staircase(name, from_value_date=None, to_value_date=None, revision_freq=None, revision_time=None, revision_tz='UTC', maturity_offset=None, maturity_time=None)

Staircase a series by block

This is a more sophisticated and controllable version of the staircase method.

Computes a series rebuilt from successive blocks of history, each linked to a distinct revision date. The revision dates are taken at regular time intervals determined by revision_freq, revision_time and revision_tz. The time lag between revision dates and value dates of each block is determined by maturity_offset and maturity_time.

name: str
    unique identifier of the series

from_value_date: pandas.Timestamp
    date from which values are retrieved

to_value_date: pandas.Timestamp
    date up to which values are retrieved

revision_freq: dict
    revision frequency, whose keys must be taken from ['years', 'months',
    'weeks', 'bdays', 'days', 'hours', 'minutes', 'seconds'], with integer
    values. Default is daily frequency, i.e. {'days': 1}

revision_time: dict
    revision time, whose keys should be taken from ['year', 'month', 'day',
    'weekday', 'hour', 'minute', 'second'], with integer values. It is only
    used to initialise the first revision date; the following revision dates
    are obtained by successively adding revision_freq. Default is {'hour': 0}

revision_tz: str
    time zone in which revision date and time are expressed. Default is 'UTC'

maturity_offset: dict
    time lag between each revision date and the start time of the related
    block values, whose keys must be taken from ['years', 'months', 'weeks',
    'bdays', 'days', 'hours', 'minutes', 'seconds'], with integer values.
    Default is {}, i.e. the revision date is the block start date

maturity_time: dict
    start time of each block, whose keys should be taken from ['year',
    'month', 'day', 'weekday', 'hour', 'minute', 'second'], with integer
    values. The start date of each block is obtained by adding
    maturity_offset to the revision date and then applying maturity_time.
    Default is {}, i.e. the block start date is just the revision date
    shifted by maturity_offset

Parameters:
  • from_value_date (datetime | None)

  • to_value_date (datetime | None)

  • revision_freq (Dict[str, int] | None)

  • revision_time (Dict[str, int] | None)

  • revision_tz (str)

  • maturity_offset (Dict[str, int] | None)

  • maturity_time (Dict[str, int] | None)

catalog(allsources=True)

Produces a catalog of all series in the form of a mapping from source to a list of (name, kind) pairs.

By default it provides the series from all sources.

If allsources is False, only the main source is listed.

Parameters:

allsources (bool)

Return type:

Dict[Tuple[str, str], List[Tuple[str, str]]]

find(query, limit=None, meta=False, _source='local')

Return a list of series descriptors matching the query.

A series descriptor is a string-like object (exhibiting the series name) with additional attributes. If meta has been set to True, the .meta (for normal metadata) and .imeta (for internal metadata) fields will be populated (non None). Lastly, the .source and .kind attributes provide the series source and kind.

Here is an example:

tsa.find(
   '(by.and '
   '  (by.tzaware)'
   '  (by.name "power capacity") '
   '  (by.metakey "plant")'
   '  (by.not (by.or '
   '    (by.metaitem "plant_type" "oil")'
   '    (by.metaitem "plant_type" "coal")))'
   '  (by.metaitem "unit" "mwh")'
   '  (by.metaitem "country" "fr"))'
)

This builds a query for timezone aware series about french power plants (in mwh) which are not of the coal or oil fuel type.

The following filters can be used from the search module:

  • by.tzaware: no parameter, yields time zone aware series names

  • by.name <str>: takes a space separated string of words, yields series names containing the substrings (in order)

  • by.metakey <str>: takes a string, strictly matches all series having this metadata key

  • by.metaitem <str> <str-or-number>: takes a string key and a string (or numerical) value, yields all series strictly matching this metadata item

  • by.and: takes a variable number of filters as above to combine them

  • by.or: takes a variable number of filters as above to combine them

  • by.not: produces the negation of a filter

Also inequalities on metadata values can be used:

  • <, <=, >, >=, =: take a string key and a value (str or num)

As in (<= "max_capacity" 900)

Parameters:
  • query (str)

  • limit (int | None)

  • meta (bool)

  • _source (str | None)

Return type:

List[ts]

interval(name)

Return a pandas Interval object which provides the lowest and highest value dates of a series.

Parameters:

name (str)

Return type:

Interval

inferred_freq(name, revision_date=None, from_value_date=None, to_value_date=None)

Return a tuple of timedelta, float (between 0 and 1).

The timedelta represents the period (or ‘freq’ in pandas parlance) and the float the quality of the period, which may vary because of irregularities in the series.

Parameters:
  • name (str)

  • revision_date (Timestamp | None)

  • from_value_date (Timestamp | None)

  • to_value_date (Timestamp | None)

Return type:

Tuple[Timedelta, float] | None
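
For instance, on the daily series built above (a minimal sketch):

>>> period, quality = tsa.inferred_freq('my_series')
>>> period
Timedelta('1 days 00:00:00')
>>> quality
1.0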

metadata(name, all=None)

Return a series metadata dictionary.

Parameters:
  • name (str)

  • all (bool)

Return type:

Dict[str, Any] | None

internal_metadata(name)

Return a series internal metadata dictionary.

Parameters:

name (str)

Return type:

Dict[str, Any]

replace_metadata(name, metadata)

Replace the metadata of a series with a dictionary from strings to anything JSON-serializable.

Parameters:
  • name (str)

  • metadata (dict)

Return type:

None

update_metadata(name, metadata)

Update the metadata of a series with a dictionary from strings to anything JSON-serializable.

Parameters:
  • name (str)

  • metadata (dict)

Return type:

None

type(name)

Return the type of a series, for instance ‘primary’ or ‘formula’.

Parameters:

name (str)

Return type:

str

log(name, limit=None, fromdate=None, todate=None)

Return a list of revisions for a given series, in reverse chronological order, with filters.

Revisions are dicts with the following keys:

  • rev: revision id (int)

  • author: author name

  • date: timestamp of the revision

  • meta: the revision metadata

Parameters:
  • name (str)

  • limit (int | None)

  • fromdate (Timestamp | None)

  • todate (Timestamp | None)

Return type:

List[Dict[str, Any]]

rename(currname, newname, propagate=True)

Rename a series.

The target name must be available.

Parameters:
  • currname (str)

  • newname (str)

  • propagate (bool)

Return type:

None

delete(name)

Delete a series.

This is an irreversible operation.

Parameters:

name (str)

strip(name, insertion_date)

Remove revisions after a specific insertion date.

This is an irreversible operation.

Parameters:
  • name (str)

  • insertion_date (datetime)

Return type:

None