tslearn.metrics.dtw
- tslearn.metrics.dtw(s1, s2, global_constraint=None, sakoe_chiba_radius=None, itakura_max_slope=None, be=None)
Compute Dynamic Time Warping (DTW) similarity measure between (possibly multidimensional) time series and return it.
DTW is computed as the Euclidean distance between aligned time series, i.e., if \(\pi\) is the optimal alignment path:
\[DTW(X, Y) = \sqrt{\sum_{(i, j) \in \pi} \|X_{i} - Y_{j}\|^2}\]
Note that this formula is still valid for the multivariate case.
The two time series are not required to have the same length, but they must have the same dimension. DTW was originally presented in [1] and is discussed in more detail in our dedicated user-guide page.
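The optimal alignment path \(\pi\) above is found by dynamic programming over a cumulative cost matrix. As a rough illustration of this recurrence (not tslearn's actual implementation; the helper name `naive_dtw` is hypothetical), an unconstrained DTW can be sketched in NumPy as:

```python
import numpy as np

def naive_dtw(s1, s2):
    """Unconstrained DTW via dynamic programming (illustrative sketch only).

    Accepts univariate series of shape (sz,) or multivariate series of
    shape (sz, d), mirroring the shapes documented for s1 and s2.
    """
    s1 = np.atleast_2d(np.asarray(s1, dtype=float).T).T  # -> (sz1, d)
    s2 = np.atleast_2d(np.asarray(s2, dtype=float).T).T  # -> (sz2, d)
    n, m = len(s1), len(s2)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.sum((s1[i - 1] - s2[j - 1]) ** 2)   # squared Euclidean
            cost[i, j] = d + min(cost[i - 1, j],       # repeat s2[j-1]
                                 cost[i, j - 1],       # repeat s1[i-1]
                                 cost[i - 1, j - 1])   # advance both
    return float(np.sqrt(cost[n, m]))
```

On the examples from the Examples section below, `naive_dtw([1, 2, 3], [1., 2., 2., 3.])` gives 0.0 and `naive_dtw([1, 2, 3], [1., 2., 2., 3., 4.])` gives 1.0, matching the documented values.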
- Parameters:
- s1 : array-like, shape=(sz1, d) or (sz1,)
A time series. If shape is (sz1,), the time series is assumed to be univariate.
- s2 : array-like, shape=(sz2, d) or (sz2,)
Another time series. If shape is (sz2,), the time series is assumed to be univariate.
- global_constraint : {“itakura”, “sakoe_chiba”} or None (default: None)
Global constraint to restrict admissible paths for DTW.
- sakoe_chiba_radius : int or None (default: None)
Radius to be used for Sakoe-Chiba band global constraint. If None and global_constraint is set to “sakoe_chiba”, a radius of 1 is used. If both sakoe_chiba_radius and itakura_max_slope are set, global_constraint is used to infer which constraint to use among the two. In this case, if global_constraint corresponds to no global constraint, a RuntimeWarning is raised and no global constraint is used.
- itakura_max_slope : float or None (default: None)
Maximum slope for the Itakura parallelogram constraint. If None and global_constraint is set to “itakura”, a maximum slope of 2.0 is used. If both sakoe_chiba_radius and itakura_max_slope are set, global_constraint is used to infer which constraint to use among the two. In this case, if global_constraint corresponds to no global constraint, a RuntimeWarning is raised and no global constraint is used.
- be : Backend object or string or None (default: None)
Backend. If be is an instance of the class NumPyBackend or the string “numpy”, the NumPy backend is used. If be is an instance of the class PyTorchBackend or the string “pytorch”, the PyTorch backend is used. If be is None, the backend is determined by the input arrays. See our dedicated user-guide page for more information.
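To make the Sakoe-Chiba band concrete: it restricts the admissible alignment cells to those near the diagonal, i.e. cells with \(|i - j| \leq \text{radius}\) for equal-length series (tslearn generalizes this to series of different lengths). A minimal sketch for the equal-length, univariate case (the helper name `dtw_sakoe_chiba` is hypothetical, not tslearn's code) could look like:

```python
import numpy as np

def dtw_sakoe_chiba(s1, s2, radius=1):
    """DTW restricted to a Sakoe-Chiba band (illustrative sketch for
    equal-length univariate series; not tslearn's implementation)."""
    s1 = np.asarray(s1, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    n, m = len(s1), len(s2)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(i - j) > radius:  # cell outside the band: not admissible
                continue
            d = (s1[i - 1] - s2[j - 1]) ** 2
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return float(np.sqrt(cost[n, m]))
```

Since tightening the band only removes candidate paths, the constrained score can never be smaller than the unconstrained one; with `radius=0` the path is forced onto the diagonal and DTW reduces to the plain Euclidean distance.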
- Returns:
- float
Similarity score
References
[1] H. Sakoe, S. Chiba, “Dynamic programming algorithm optimization for spoken word recognition,” IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 26(1), pp. 43–49, 1978.
Examples
>>> dtw([1, 2, 3], [1., 2., 2., 3.])
0.0
>>> dtw([1, 2, 3], [1., 2., 2., 3., 4.])
1.0
The PyTorch backend can be used to compute gradients:
>>> import torch
>>> s1 = torch.tensor([[1.0], [2.0], [3.0]], requires_grad=True)
>>> s2 = torch.tensor([[3.0], [4.0], [-3.0]])
>>> sim = dtw(s1, s2, be="pytorch")
>>> print(sim)
tensor(6.4807, grad_fn=<SqrtBackward0>)
>>> sim.backward()
>>> print(s1.grad)
tensor([[-0.3086],
        [-0.1543],
        [ 0.7715]])
>>> s1_2d = torch.tensor([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]], requires_grad=True)
>>> s2_2d = torch.tensor([[3.0, 3.0], [4.0, 4.0], [-3.0, -3.0]])
>>> sim = dtw(s1_2d, s2_2d, be="pytorch")
>>> print(sim)
tensor(9.1652, grad_fn=<SqrtBackward0>)
>>> sim.backward()
>>> print(s1_2d.grad)
tensor([[-0.2182, -0.2182],
        [-0.1091, -0.1091],
        [ 0.5455,  0.5455]])