Add rotated data #64

Merged 4 commits on Apr 19, 2024
6 changes: 3 additions & 3 deletions .github/workflows/test-and-lint.yml
@@ -10,7 +10,7 @@ jobs:
test:
strategy:
matrix:
python-version: ["3.8", "3.9", "3.10", "3.11"]
python-version: ["3.9", "3.10", "3.11"]
os: [ windows-latest, ubuntu-latest ]
runs-on: ${{ matrix.os }}

@@ -21,14 +21,14 @@ jobs:
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies (all)
-if : ${{ matrix.python-version == '3.8' || matrix.python-version == '3.9' }}
+if : ${{ matrix.python-version == '3.9' }}
run: |
python -m pip install --upgrade pip
pip install poetry
poetry config virtualenvs.in-project true
poetry install -E all
- name: Install dependencies (no hmm)
-if : ${{ matrix.python-version != '3.8' && matrix.python-version != '3.9' }}
+if : ${{ matrix.python-version != '3.9' }}
run: |
python -m pip install --upgrade pip
pip install poetry
2 changes: 1 addition & 1 deletion .ruff.toml
@@ -1,5 +1,5 @@
line-length = 120
target-version = "py38"
target-version = "py39"

[lint]
select = [
17 changes: 12 additions & 5 deletions CHANGELOG.md
@@ -10,6 +10,12 @@ project.

## [Unreleased]

+### Added
+
+- All orientation and trajectory methods now have a new result `rotated_data_` that provides the input data rotated
+  to the world frame based on the calculated orientation.
+  (https://github.com/mad-lab-fau/gaitmap/pull/64)
+
### Fixed

- Fixed a bug that when using `merge_interval` with empty input of shape (0, 2), the output was not empty.
@@ -22,6 +28,7 @@ project.
- Changed resampling function in inverse feature transform of HMM.
Resampling of state sequence is now also possible if the `target_sample_rate` is not a multiple of the HMM
sampling rate, e.g. `target_sample_rate=200`, `sample_rate=52.1` (https://github.com/mad-lab-fau/gaitmap/pull/62)
+- Dropped Python 3.8 support! (https://github.com/mad-lab-fau/gaitmap/pull/64)

## [2.3.0] - 2023-08-03

@@ -41,14 +48,14 @@ project.

### Fixed

-- Fixed bug in HMM when uneven sequnece length were provided. In newer numpy versions this requires an explicit cast to
+- Fixed bug in HMM when uneven sequence length were provided. In newer numpy versions this requires an explicit cast to
an object array.

## [2.2.1] - 2023-06-22

### Fixed

-- Fixed edecase where the output of the stride event method had the events in the wrong order for some strides.
+- Fixed edge case where the output of the stride event method had the events in the wrong order for some strides.
The reason for that is that a valid segmented stridelist does not always result in a valid min_vel_event list for
algorithms that are allowed to search outside the segmented stride region (e.g. `HerzerEventDetection`).
We now check for consistency again after the stride list conversion.
@@ -64,11 +71,11 @@ Gaitmap is now available as official PyPi package!!!
(https://github.com/mad-lab-fau/gaitmap/pull/15)
- Certain ZUPT detectors now return the `min_vel_index_` and `min_vel_value_` as additional attributes.
These values represent the index in the input data with the lowest velocity and the corresponding velocity value
-(according to the internal metric of the repective ZUPT detector).
+(according to the internal metric of the respective ZUPT detector).
(https://github.com/mad-lab-fau/gaitmap/pull/16)
- New example explaining more advanced usage of the `RtsKalman` algorithm.
(https://github.com/mad-lab-fau/gaitmap/pull/17)
-- The `find_extrema_in_radius` and the `snap_to_min` utility functions gained the ability to define asymetric search
+- The `find_extrema_in_radius` and the `snap_to_min` utility functions gained the ability to define asymmetric search
windows around the search indices.
(https://github.com/mad-lab-fau/gaitmap/pull/21)
- Temporal and Spatial Parameter calculation have new options to work with ic-stride lists and with partial input
@@ -81,7 +88,7 @@ Gaitmap is now available as official PyPi package!!!
### Changed

- We now require Pandas >2.0 as we are using the new pandas dtypes.
-It could be that this will require you to perform some sxplicit type conversion in your code.
+It could be that this will require you to perform some explicit type conversion in your code.
- The Zupt Detector example is updated to use newer tpcp features
(https://github.com/mad-lab-fau/gaitmap/pull/17)
- The column order of the Spatial Parameter Calculation output has been changed
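
To make the new changelog entry above concrete, here is a minimal usage sketch, assuming the `MadgwickAHRS` orientation method; `imu_data` and `sampling_rate_hz` are hypothetical placeholders for a single-sensor DataFrame and its sampling rate:

```python
from gaitmap.trajectory_reconstruction import MadgwickAHRS

# Estimate the sensor orientation over time from the IMU signal.
ori_method = MadgwickAHRS().estimate(imu_data, sampling_rate_hz=sampling_rate_hz)

# New in this PR: the input data rotated into the world frame,
# based on the orientation estimated above.
world_frame_data = ori_method.rotated_data_
```
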
4 changes: 2 additions & 2 deletions README.md
@@ -71,8 +71,8 @@ You can track the progress in the [pull request](https://github.com/mad-lab-fau/

### Supported Python versions

-*gaitmap* is tested against Python 3.8 and 3.9 at the moment.
-We expect most features to work with all Python versions >= 3.8, but because of some known issues
+*gaitmap* is tested against Python 3.9 at the moment.
+We expect most features to work with all Python versions >= 3.9, but because of some known issues
(see specific features above) we do not officially support them.

## Working with Algorithms
4 changes: 2 additions & 2 deletions docs/conf.py
@@ -16,7 +16,7 @@
from datetime import datetime
from inspect import getsourcefile
from pathlib import Path
-from typing import List, Optional
+from typing import Optional

import toml
from sphinx_gallery.sorting import ExplicitOrder
@@ -245,7 +245,7 @@ def skip_properties(app, what, name, obj, skip, options) -> Optional[bool]:
"""


-def add_info_about_origin(app, what, name, obj, options, lines: List[str]) -> None:
+def add_info_about_origin(app, what, name, obj, options, lines: list[str]) -> None:
"""Add a short info text to all algorithms that are only available via gaitmap_mad."""
if what != "class":
return
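
The typing change above follows the pattern used throughout this PR: with Python 3.8 dropped, the deprecated `typing` aliases can be replaced by the built-in generics from PEP 585, which require Python 3.9 or newer. A quick illustrative comparison (not code from the repository):

```python
from typing import List  # old style: extra import required


def old_style(lines: List[str]) -> None: ...


def new_style(lines: list[str]) -> None: ...  # Python 3.9+: no import needed
```
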
4 changes: 2 additions & 2 deletions examples/advanced_features/multi_process.py
@@ -32,7 +32,7 @@
"""

from pprint import pprint
-from typing import Any, Dict
+from typing import Any

# %%
# Load some example data
@@ -70,7 +70,7 @@
# This could be further optimized by using a read-only shared memory object for the data.


-def run(dtw: BarthDtw, parameter: Dict[str, Any]) -> BarthDtw:
+def run(dtw: BarthDtw, parameter: dict[str, Any]) -> BarthDtw:
# For this run, change the parameters on the dtw object
dtw = dtw.set_params(**parameter)
dtw = dtw.segment(data=bf_data, sampling_rate_hz=sampling_rate_hz)
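
Because `run` reconfigures the `BarthDtw` template per parameter set, it pairs naturally with a process pool. A sketch of how it might be driven; the pool setup and the `max_cost` values are illustrative assumptions, not part of the example file:

```python
from concurrent.futures import ProcessPoolExecutor
from functools import partial

# Hypothetical parameter grid to evaluate in parallel.
parameter_sets = [{"max_cost": cost} for cost in (2.0, 2.5, 3.0)]

with ProcessPoolExecutor() as executor:
    # Each worker receives the template `dtw` object and one parameter set.
    results = list(executor.map(partial(run, dtw), parameter_sets))
```
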
4 changes: 2 additions & 2 deletions examples/datasets_and_pipelines/custom_dataset.py
@@ -40,7 +40,7 @@
# Then you can filter the dataset first and load the data once you know which data-points you want to access.
# We will discuss this later in the example.
from itertools import product
-from typing import List, Optional, Union
+from typing import Optional, Union

import pandas as pd

@@ -329,7 +329,7 @@ def __init__(
data_folder: str,
custom_config_para: bool = False,
*,
-groupby_cols: Optional[Union[List[str], str]] = None,
+groupby_cols: Optional[Union[list[str], str]] = None,
subset_index: Optional[pd.DataFrame] = None,
) -> None:
self.data_folder = data_folder
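
For context, the signature above follows the standard tpcp dataset pattern. A hedged instantiation sketch, where `CustomDataset` stands in for the class defined in this example and the folder and column names are made up:

```python
dataset = CustomDataset(
    data_folder="/path/to/recordings",
    custom_config_para=True,
    groupby_cols=["participant"],  # matches the new Optional[Union[list[str], str]] annotation
)
```
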
18 changes: 9 additions & 9 deletions gaitmap/_event_detection_common/_event_detection_mixin.py
@@ -1,6 +1,6 @@
"""Mixin for event detection algorithms that work similar to Rampp et al."""

-from typing import Any, Callable, Dict, Optional, Tuple, Union
+from typing import Any, Callable, Optional, Union

import numpy as np
import pandas as pd
@@ -30,10 +30,10 @@
class _EventDetectionMixin:
memory: Optional[Memory]
enforce_consistency: bool
-detect_only: Optional[Tuple[str, ...]]
+detect_only: Optional[tuple[str, ...]]

-min_vel_event_list_: Optional[Union[pd.DataFrame, Dict[str, pd.DataFrame]]]
-segmented_event_list_: Optional[Union[pd.DataFrame, Dict[str, pd.DataFrame]]]
+min_vel_event_list_: Optional[Union[pd.DataFrame, dict[str, pd.DataFrame]]]
+segmented_event_list_: Optional[Union[pd.DataFrame, dict[str, pd.DataFrame]]]

data: SensorData
sampling_rate_hz: float
@@ -43,7 +43,7 @@ def __init__(
self,
memory: Optional[Memory] = None,
enforce_consistency: bool = True,
-detect_only: Optional[Tuple[str, ...]] = None,
+detect_only: Optional[tuple[str, ...]] = None,
) -> None:
self.memory = memory
self.enforce_consistency = enforce_consistency
@@ -85,7 +85,7 @@ def detect(self, data: SensorData, stride_list: StrideList, *, sampling_rate_hz:
if dataset_type == "single":
results = self._detect_single_dataset(data, stride_list, detect_kwargs=detect_kwargs, memory=self.memory)
else:
-results_dict: Dict[_Hashable, Dict[str, pd.DataFrame]] = {}
+results_dict: dict[_Hashable, dict[str, pd.DataFrame]] = {}
for sensor in get_multi_sensor_names(data):
results_dict[sensor] = self._detect_single_dataset(
data[sensor],
@@ -104,9 +104,9 @@ def _detect_single_dataset(
self,
data: pd.DataFrame,
stride_list: pd.DataFrame,
-detect_kwargs: Dict[str, Any],
+detect_kwargs: dict[str, Any],
memory: Memory,
-) -> Dict[str, pd.DataFrame]:
+) -> dict[str, pd.DataFrame]:
"""Detect gait events for a single sensor data set and put into correct output stride list."""
if memory is None:
memory = Memory(None)
@@ -168,7 +168,7 @@ def _select_all_event_detection_method(self) -> Callable:
"""
raise NotImplementedError()

-def _get_detect_kwargs(self) -> Dict[str, Any]:
+def _get_detect_kwargs(self) -> dict[str, Any]:
"""Return a dictionary of keyword arguments that should be passed to the detect method.

This is a separate method to make it easy to overwrite by a subclass.
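
The `detect_only` tuple typed above restricts detection to a subset of events. A usage sketch, assuming `RamppEventDetection` (one of the algorithms built on this mixin), `data` and `stride_list` from earlier pipeline steps, and the 204.8 Hz rate of the gaitmap example data:

```python
from gaitmap.event_detection import RamppEventDetection

# Only detect initial contacts ("ic"); other events are skipped.
ed = RamppEventDetection(detect_only=("ic",))
ed = ed.detect(data=data, stride_list=stride_list, sampling_rate_hz=204.8)

# Single DataFrame for single-sensor input, dict of DataFrames otherwise.
events = ed.min_vel_event_list_
```
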
23 changes: 16 additions & 7 deletions gaitmap/base.py
@@ -2,7 +2,7 @@

import json
import warnings
-from typing import Any, Dict, Optional, Type, TypeVar, Union
+from typing import Any, Optional, TypeVar, Union

import numpy as np
import pandas as pd
@@ -22,6 +22,7 @@
StrideList,
VelocityList,
)
+from gaitmap.utils.rotations import rotate_dataset_series

BaseType = TypeVar("BaseType", bound="_BaseSerializable") # pylint: disable=invalid-name

@@ -114,27 +115,27 @@ def _custom_deserialize(json_obj): # pylint: disable=too-many-return-statements

class _BaseSerializable(tpcp.BaseTpcpObject):
@classmethod
-def _get_subclasses(cls: Type[Self]):
+def _get_subclasses(cls: type[Self]):
for subclass in cls.__subclasses__():
yield from subclass._get_subclasses()
yield subclass

@classmethod
-def _find_subclass(cls: Type[Self], name: str) -> Type[Self]:
+def _find_subclass(cls: type[Self], name: str) -> type[Self]:
for subclass in _BaseSerializable._get_subclasses():
if subclass.__name__ == name:
return subclass
raise ValueError(f"No algorithm class with name {name} exists")

@classmethod
-def _from_json_dict(cls: Type[Self], json_dict: Dict) -> Self:
+def _from_json_dict(cls: type[Self], json_dict: dict) -> Self:
params = json_dict["params"]
input_data = {k: params[k] for k in tpcp.get_param_names(cls) if k in params}
instance = cls(**input_data)
return instance

-def _to_json_dict(self) -> Dict[str, Any]:
-    json_dict: Dict[str, Union[str, Dict[str, Any]]] = {
+def _to_json_dict(self) -> dict[str, Any]:
+    json_dict: dict[str, Union[str, dict[str, Any]]] = {
"_gaitmap_obj": self.__class__.__name__,
"params": self.get_params(deep=False),
}
@@ -154,7 +155,7 @@ def to_json(self) -> str:
return json.dumps(final_dict, indent=4, cls=_CustomEncoder)

@classmethod
-def from_json(cls: Type[Self], json_str: str) -> Self:
+def from_json(cls: type[Self], json_str: str) -> Self:
"""Import an gaitmap object from its json representation.

For details have a look at the this :ref:`example <algo_serialize>`.
@@ -229,13 +230,21 @@ class BaseOrientationMethod(BaseAlgorithm):
_action_methods = ("estimate",)
orientation_object_: Rotation

+data: SingleSensorData
+sampling_rate_hz: float

@property
def orientation_(self) -> SingleSensorOrientationList:
"""Orientations as pd.DataFrame."""
df = pd.DataFrame(self.orientation_object_.as_quat(), columns=GF_ORI)
df.index.name = "sample"
return df

+@property
+def rotated_data_(self) -> SingleSensorData:
+    """Rotated data."""
+    return rotate_dataset_series(self.data, self.orientation_object_[:-1])

def estimate(
self,
data: SingleSensorData,
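
Worth noting in the new `rotated_data_` property: gaitmap orientation methods return n + 1 orientations for n input samples (the initial orientation plus one per sample), which is why the final orientation is sliced off with `[:-1]` before rotating. A small sketch of the resulting alignment, reusing the hypothetical `ori_method` and `imu_data` from the changelog note above:

```python
n_samples = len(imu_data)

# One orientation per sample boundary, including the initial orientation.
assert len(ori_method.orientation_) == n_samples + 1

# `rotated_data_` drops the final orientation, so it aligns 1:1 with the input.
assert len(ori_method.rotated_data_) == n_samples
```
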
19 changes: 10 additions & 9 deletions gaitmap/data_transform/_base.py
@@ -1,8 +1,9 @@
"""Basic transformers for higher level functionality."""

+from collections.abc import Sequence
from copy import copy
from functools import reduce
-from typing import List, Sequence, Set, Tuple, Union
+from typing import Union

import pandas as pd
from tpcp import OptimizableParameter, PureParameter
@@ -90,14 +91,14 @@ class GroupedTransformer(BaseTransformer, TrainableTransformerMixin):

"""

-transformer_mapping: OptimizableParameter[List[Tuple[Union[_Hashable, Tuple[_Hashable, ...]], BaseTransformer]]]
+transformer_mapping: OptimizableParameter[list[tuple[Union[_Hashable, tuple[_Hashable, ...]], BaseTransformer]]]
keep_all_cols: PureParameter[bool]

data: SingleSensorData

def __init__(
self,
-transformer_mapping: List[Tuple[Union[_Hashable, Tuple[_Hashable, ...]], BaseTransformer]],
+transformer_mapping: list[tuple[Union[_Hashable, tuple[_Hashable, ...]], BaseTransformer]],
keep_all_cols: bool = True,
) -> None:
self.transformer_mapping = transformer_mapping
@@ -181,7 +182,7 @@ def transform(self, data: SingleSensorData, **kwargs) -> Self:
self.transformed_data_ = pd.concat(results, axis=1)[sorted(mapped_cols, key=list(data.columns).index)]
return self

-def _validate_mapping(self) -> Set[_Hashable]:
+def _validate_mapping(self) -> set[_Hashable]:
# Check that each column is only mentioned once:
unique_k = []
for k, _ in self.transformer_mapping:
@@ -196,7 +197,7 @@ def _validate_mapping(self) -> Set[_Hashable]:
unique_k.append(i)
return set(unique_k)

-def _validate(self, data: SingleSensorData, selected_cols: Set[_Hashable]) -> None:
+def _validate(self, data: SingleSensorData, selected_cols: set[_Hashable]) -> None:
if not set(data.columns).issuperset(selected_cols):
raise ValueError("You specified transformations for columns that do not exist. This is not supported!")

@@ -237,9 +238,9 @@ class ChainedTransformer(BaseTransformer, TrainableTransformerMixin):

_composite_params = ("chain",)

-chain: OptimizableParameter[List[Tuple[_Hashable, BaseTransformer]]]
+chain: OptimizableParameter[list[tuple[_Hashable, BaseTransformer]]]

-def __init__(self, chain: List[Tuple[_Hashable, BaseTransformer]]) -> None:
+def __init__(self, chain: list[tuple[_Hashable, BaseTransformer]]) -> None:
self.chain = chain

def self_optimize(self, data: Sequence[SingleSensorData], **kwargs) -> Self:
@@ -332,9 +333,9 @@ class ParallelTransformer(BaseTransformer, TrainableTransformerMixin):

_composite_params = ("transformers",)

-transformers: OptimizableParameter[List[Tuple[_Hashable, BaseTransformer]]]
+transformers: OptimizableParameter[list[tuple[_Hashable, BaseTransformer]]]

-def __init__(self, transformers: List[Tuple[_Hashable, BaseTransformer]]) -> None:
+def __init__(self, transformers: list[tuple[_Hashable, BaseTransformer]]) -> None:
self.transformers = transformers

def self_optimize(self, data: Sequence[SingleSensorData], **kwargs) -> Self:
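
The `transformer_mapping` entries typed above pair a column, or tuple of columns, with a transformer. A minimal sketch, assuming gaitmap sensor-frame column names, a `data` DataFrame from earlier steps, and illustrative cutoff frequencies:

```python
from gaitmap.data_transform import ButterworthFilter, GroupedTransformer

grouped = GroupedTransformer(
    transformer_mapping=[
        (("acc_x", "acc_y", "acc_z"), ButterworthFilter(order=4, cutoff_freq_hz=10)),
        (("gyr_x", "gyr_y", "gyr_z"), ButterworthFilter(order=4, cutoff_freq_hz=15)),
    ],
    keep_all_cols=True,  # pass untransformed columns through unchanged
)
grouped = grouped.transform(data, sampling_rate_hz=204.8)
filtered_data = grouped.transformed_data_
```
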
6 changes: 3 additions & 3 deletions gaitmap/data_transform/_filter.py
@@ -1,6 +1,6 @@
"""A set of filters that can be applied to data."""

-from typing import Literal, Optional, Tuple, Union
+from typing import Literal, Optional, Union

import pandas as pd
from scipy.signal import butter, sosfiltfilt
@@ -73,15 +73,15 @@ class ButterworthFilter(BaseFilter):
"""

order: int
-cutoff_freq_hz: Union[float, Tuple[float, float]]
+cutoff_freq_hz: Union[float, tuple[float, float]]
filter_type: Literal["lowpass", "highpass", "bandpass", "bandstop"]

sampling_rate_hz: float

def __init__(
self,
order: int,
-cutoff_freq_hz: Union[float, Tuple[float, float]],
+cutoff_freq_hz: Union[float, tuple[float, float]],
filter_type: Literal["lowpass", "highpass", "bandpass", "bandstop"] = "lowpass",
) -> None:
self.order = order
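
The `cutoff_freq_hz` union typed above means low- and high-pass filters take a single cutoff while band filters take a `(low, high)` tuple; the filter itself is zero-phase, as it is built on `sosfiltfilt`. A short sketch with illustrative cutoffs (`data` is again an assumed single-sensor DataFrame):

```python
from gaitmap.data_transform import ButterworthFilter

# Single cutoff for a low-pass filter ...
lowpass = ButterworthFilter(order=4, cutoff_freq_hz=10, filter_type="lowpass")

# ... and a (low, high) tuple for a band-pass filter.
bandpass = ButterworthFilter(order=4, cutoff_freq_hz=(0.5, 10), filter_type="bandpass")

filtered_data = lowpass.transform(data, sampling_rate_hz=204.8).transformed_data_
```
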