Dev #49


Merged (23 commits, May 3, 2024)

Commits (23)
- a98584d: updated comments (pydemull, May 1, 2024)
- 3970f9c: solved the problem of NAs generated in the 'wearing' column when data… (pydemull, May 1, 2024)
- 2323848: adjusted code so that all graphics related to activity accumulation m… (pydemull, May 1, 2024)
- 5f734a6: corrected an error in the computation of MX5 (pydemull, May 1, 2024)
- 39d8104: modified settings to compute the intensity bands relating to the inte… (pydemull, May 1, 2024)
- e22ed99: updated documentation for MX metrics (pydemull, May 1, 2024)
- 835c517: updated ga workflow for test coverage (pydemull, May 1, 2024)
- b307c2a: replaced 'legend.position' argument by 'legend.position.inside' becau… (pydemull, May 1, 2024)
- ef78e1e: moved from the 'legend.position' to 'legend.position.inside' argument… (pydemull, May 1, 2024)
- 89393a6: updated doc (pydemull, May 1, 2024)
- eaf3dc4: updated workflow (pydemull, May 1, 2024)
- 7398516: updated ci covr (pydemull, May 1, 2024)
- 45bf91a: updated covr ci (pydemull, May 1, 2024)
- 04b4d14: added shinytest2 to covr workflow (pydemull, May 1, 2024)
- 0f7e21e: updated test coverage (pydemull, May 2, 2024)
- e5a41ec: updated test-coverage (pydemull, May 2, 2024)
- 6cea72f: update test coverage new attempt (pydemull, May 2, 2024)
- c159760: ga workflows update: new attempt (pydemull, May 3, 2024)
- 51cf908: ga workflows update: new attempt (pydemull, May 3, 2024)
- ddacbb1: updated rcmd-check (pydemull, May 3, 2024)
- 1960cf7: updated workflows (pydemull, May 3, 2024)
- 4d7dabb: updated the remaining 'legend.position' arguments to 'legend.position… (pydemull, May 3, 2024)
- 375306d: updated doc of the 'do_all_analyses' internal function (pydemull, May 3, 2024)
1 change: 1 addition & 0 deletions .Rbuildignore
@@ -32,3 +32,4 @@ $run_dev.*
^Meta$
_\.new\.png$
^temp$
^\.github$
30 changes: 8 additions & 22 deletions .github/workflows/R-CMD-check.yaml
@@ -1,4 +1,4 @@
# Workflow derived from https://github.com/r-lib/actions/tree/master/examples
# Workflow derived from https://github.com/r-lib/actions/tree/v2/examples
# Need help debugging build failures? Start at https://github.com/r-lib/actions#where-to-find-help
on:
push:
@@ -19,16 +19,15 @@ jobs:
matrix:
config:
- {os: windows-latest, r: 'release'}
- {os: ubuntu-latest, r: 'release'}
- {os: windows-latest, r: 'devel'}
- {os: ubuntu-latest, r: 'devel'}
- {os: ubuntu-latest, r: 'devel', http-user-agent: 'release'}
- {os: ubuntu-latest, r: 'oldrel-1'}

env:
GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
R_KEEP_PKG_SOURCE: yes

steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4

- uses: r-lib/actions/setup-pandoc@v2

@@ -37,26 +36,13 @@ jobs:
r-version: ${{ matrix.config.r }}
http-user-agent: ${{ matrix.config.http-user-agent }}
use-public-rspm: true

- name: Install sessioninfo
run: |
install.packages("sessioninfo")
shell: Rscript {0}

- uses: r-lib/actions/setup-r-dependencies@v2
with:
extra-packages: any::rcmdcheck

- uses: r-lib/actions/check-r-package@v2

- name: Show testthat output
if: always()
run: find check -name 'testthat.Rout*' -exec cat '{}' \; || true
shell: bash
needs: check

- name: Upload check results
if: failure()
uses: actions/upload-artifact@main
- uses: r-lib/actions/check-r-package@v2
with:
name: ${{ runner.os }}-r${{ matrix.config.r }}-results
path: check
upload-snapshots: true
build_args: 'c("--no-manual","--compact-vignettes=gs+qpdf")'
54 changes: 29 additions & 25 deletions .github/workflows/test-coverage.yaml
@@ -1,46 +1,50 @@
# Workflow derived from https://github.com/rstudio/shinytest2/tree/main/actions/test-app/example-test-app-description.yaml
# Workflow derived from https://github.com/r-lib/actions/tree/v2/examples
# Need help debugging build failures? Start at https://github.com/r-lib/actions#where-to-find-help
on:
push:
branches: [main, master]
pull_request:
branches: [main, master]

name: Test app w/ {renv}
name: test-coverage

jobs:
test-app:
runs-on: ${{ matrix.config.os }}

name: ${{ matrix.config.os }} (${{ matrix.config.r }})

strategy:
fail-fast: false
matrix:
config:
- {os: ubuntu-latest, r: release}

test-coverage:
runs-on: windows-latest
env:
GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
R_KEEP_PKG_SOURCE: yes

steps:
- uses: actions/checkout@v3

- uses: r-lib/actions/setup-pandoc@v2
- uses: actions/checkout@v4

- uses: r-lib/actions/setup-r@v2
with:
r-version: ${{ matrix.config.r }}
use-public-rspm: true

- uses: r-lib/actions/setup-renv@v2

- uses: rstudio/shinytest2/actions/test-app@actions/v1
- uses: r-lib/actions/setup-r-dependencies@v2
with:
app-dir: "."

extra-packages: any::covr
needs: coverage

- name: Test coverage
run: covr::codecov(quiet = FALSE, clean = FALSE)
run: |
covr::codecov(
quiet = FALSE,
clean = FALSE,
install_path = file.path(normalizePath(Sys.getenv("RUNNER_TEMP"), winslash = "/"), "package")
)
shell: Rscript {0}


- name: Show testthat output
if: always()
run: |
## --------------------------------------------------------------------
find '${{ runner.temp }}/package' -name 'testthat.Rout*' -exec cat '{}' \; || true
shell: bash

- name: Upload test results
if: failure()
uses: actions/upload-artifact@v4
with:
name: coverage-test-failures
path: ${{ runner.temp }}/package
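The reworked workflow's coverage step reduces to a single R call. A local sketch of the same step, run from the package root, might look like the following (this is an assumption about local usage, not part of the PR; it reports coverage instead of uploading it to Codecov):

```r
# Local approximation of the CI coverage step. Assumes the 'covr' package
# is installed and the working directory is the package root.
library(covr)

# clean = FALSE keeps the temporary package installation so that failing
# testthat output can be inspected afterwards, mirroring the CI step.
cov <- package_coverage(quiet = FALSE, clean = FALSE)

# Print the overall coverage percentage rather than calling covr::codecov().
percent_coverage(cov)
```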
59 changes: 30 additions & 29 deletions R/compute_accumulation_metrics.R
@@ -98,18 +98,16 @@ compute_accumulation_metrics <- function(
zoom_to = "23:59:59"
){

# Filtering data based on selected dates and time periods, and adding a
# column containing "SED", NON-SED", or "Nonwear" labels
# Getting selected dates
if (is.null(dates)) {
selected_dates <- attributes(as.factor(data$date))$levels
} else {
selected_dates <- attributes(as.factor(dates))$levels
}

# Fix bug: Convert NAs to "Nonwear"
data[[col_cat_int]] <- dplyr::if_else(is.na(data[[col_cat_int]]), "Nonwear", data[[col_cat_int]])

data <-
# Filtering data based on selected dates and time periods, and adding a
# column 'new_intensity_category' containing "SED", PA, or "Nonwear" values
data <-
data %>%
dplyr::filter(
date %in% as.Date(selected_dates) &
@@ -127,28 +125,28 @@ if (is.null(dates)) {
"SED",
dplyr::if_else(
.data[[col_cat_int]] == "Nonwear",
"Nonwear",
dplyr::if_else(is.na(.data[[col_cat_int]]), "Nonwear", "Nonwear")
"Nonwear", NA
)
)
)
)

# Updating bouts IDs
# Setting IDs for the new PA/SED/Nonwear bouts
data$new_intensity_category <- as.factor(data$new_intensity_category)
data$new_intensity_category_num <- as.numeric(as.character(forcats::fct_recode(data$new_intensity_category , "0" = "Nonwear", "1" = "SED", "2" = "PA")))
data$new_bout <- cumsum(c(1, as.numeric(diff(data$new_intensity_category_num))!= 0))

# Getting arguments
# Getting the type of activity bout to be analyzed
behaviour <- match.arg(behaviour)
if(behaviour == "sed") {BEHAV <- "SED"; color_fill = c("#D9DBE5", "#A6ADD5", "#6A78C3", "#3F51B5"); auto_text = "sedentary"}
if(behaviour == "pa") {BEHAV <- "PA"; color_fill = c("#EDD3DD", "#F38DB6", "#FA3B87", "#FF0066"); auto_text = "physical activity"}

# Getting correction factor related to the epoch length (reference epoch = 60 s);
# Getting the correction factor related to the epoch length (reference epoch = 60 s);
# bout durations are computed in minutes
cor_factor = 60 / (as.numeric(data[[col_time]][2] - data[[col_time]][1]))

# Summarising bout durations (in minutes) of interest by day
# Getting all the identified bouts of the kind of interest and their respective
# durations (in minutes)
recap_bouts_by_day <-
data %>%
dplyr::group_by(date, new_bout, new_intensity_category) %>%
@@ -166,7 +164,7 @@ recap_bouts_by_day <-
)


# Computing mean daily number of breaks
# Computing the mean of the daily number of breaks
mean_breaks <-
recap_bouts_by_day %>%
dplyr::ungroup(new_bout, new_intensity_category) %>%
@@ -206,19 +204,21 @@ mean_breaks <-
fill = dur_cat
)
) +
geom_rect(aes(
annotate(
geom = "rect",
xmin = hms::as_hms(0),
xmax = hms::as_hms(valid_wear_time_start),
ymin = -Inf,
ymax = Inf),
ymax = Inf,
color = "grey",
fill = "grey"
) +
geom_rect(aes(
annotate(
geom = "rect",
xmin = hms::as_hms(valid_wear_time_end),
xmax = hms::as_hms("23:59:59"),
ymin = -Inf,
ymax = Inf),
ymax = Inf,
color = "grey",
fill = "grey"
) +
@@ -266,7 +266,8 @@ mean_breaks <-
geom_vline(aes(xintercept = 3600*22), linetype = "dotted", color = "grey50") +
geom_vline(aes(xintercept = 3600*23), linetype = "dotted", color = "grey50")

# Summarising bout durations (in minutes) of interest without grouping by day
# Getting all the identified bouts of the kind of interest and their respective
# durations (in minutes) without grouping by day
recap_bouts <-
data %>%
dplyr::group_by(new_bout, new_intensity_category) %>%
@@ -308,7 +309,7 @@ summarised_bouts <-
)


# Fitting cumulated fraction of time vs bout duration
# Fitting cumulated fraction of time vs bout duration relationship
model <- nls(
cum_frac_time ~ duration^x / (duration^x + UBD^x),
data = summarised_bouts,
@@ -333,26 +334,26 @@ max_bout_duration <- max(summarised_bouts$duration)
duration = seq(xmin, max_bout_duration, 0.1)
) %>%
dplyr::mutate(
pred = duration ^ (-alpha) ,
pred = duration ^ (-alpha),
pred = duration ^ (-alpha) / max(pred) * max(summarised_bouts$n, na.rm = TRUE)
)

# Building the graphic
p_alpha <-
ggplot(data = recap_bouts) +
ggplot(data = recap_bouts |> dplyr::ungroup(new_bout)) +
geom_histogram(aes(x = duration, fill = dur_cat), binwidth = xmin) +
scale_fill_manual(values = color_fill) +
labs(x = "Bout duration (min)", y = "n", fill = "Duration (min)") +
geom_line(data = df_pred_alpha, aes(x = duration, y = pred), linewidth = 0.8, color = "grey10") +
geom_line(data = df_pred_alpha, aes(x = duration, y = pred), linewidth = 0.5, color = "grey10") +
annotate("text", x = max_bout_duration/2, y = max(summarised_bouts$n, na.rm = TRUE)/2, label = paste("alpha =", round(alpha, 2)), hjust = 0.5, size = 6, vjust = 0.5) +
theme_bw() +
theme(legend.position = "bottom")

# Building a graphic for MBD
p_MBD <-
ggplot(data = recap_bouts) +
ggplot(data = recap_bouts |> dplyr::ungroup(new_bout)) +
geom_histogram(aes(x = duration, fill = dur_cat), binwidth = xmin) +
geom_segment(aes(x = MBD, xend = MBD, y = 0, yend = max(summarised_bouts$n, na.rm = TRUE)), linetype = "dashed") +
annotate(geom = "segment", x = MBD, xend = MBD, y = 0, yend = max(summarised_bouts$n, na.rm = TRUE), linetype = "dashed") +
scale_fill_manual(values = color_fill) +
labs(x = "Bout duration (min)", y = "n", fill = "Duration (min)") +
geom_segment(
@@ -377,10 +378,10 @@ p_MBD <-
p_UBD <-
ggplot(data = summarised_bouts, aes(x = duration, y = cum_frac_time)) +
geom_point(aes(color = dur_cat), size = 6) +
geom_segment(aes(x = 0, y = 0.5, xend = UBD, yend = 0.5), linetype = "dashed", linewidth = 0.5) +
geom_segment(aes(x = UBD, y = 0.5, xend = UBD, yend = 0), linetype = "dashed", linewidth = 0.5) +
geom_line(data = df_pred_UBD, aes(x = duration, y = pred), linewidth = 0.8, color = "grey10") +
geom_segment(aes(x = max_bout_duration/2, y = 0.4, xend = UBD, yend = 0), arrow = arrow(length = unit(0.02, "npc"))) +
annotate(geom = "segment", x = 0, y = 0.5, xend = UBD, yend = 0.5, linetype = "dashed", linewidth = 0.5) +
annotate(geom = "segment", x = UBD, y = 0.5, xend = UBD, yend = 0, linetype = "dashed", linewidth = 0.5) +
geom_line(data = df_pred_UBD, aes(x = duration, y = pred), linewidth = 0.5, color = "grey10") +
annotate(geom = "segment", x = max_bout_duration/2, y = 0.4, xend = UBD, yend = 0, arrow = arrow(length = unit(0.02, "npc"))) +
annotate("text", x = max_bout_duration/2, y = 0.4, label = paste(" UBD =", round(UBD, 1), "min"), hjust = 0, size = 6, vjust = 0) +
labs(x = "Bout duration (min)", y = paste("Cumulated fraction of total", auto_text, "time"), color = "Duration (min)") +
scale_color_manual(values = color_fill) +
@@ -417,7 +418,7 @@ p_gini <-
geom_ribbon(aes(x = cum_frac_bout, ymin = cum_frac_time, ymax = cum_frac_bout), fill = alpha(color_fill[[2]], 0.3)) +
geom_point(data = summarised_bouts2, aes(color = dur_cat), size = 6) +
geom_segment(x = 0, xend = 1, y = 0, yend = 1, linewidth = 0.3) +
geom_line(linewidth = 0.6) +
geom_line(linewidth = 0.5) +
scale_color_manual(values = color_fill) +
coord_cartesian(xlim = c(0, 1), ylim = c(0, 1)) +
labs(
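The `nls()` call in this file fits the hyperbolic model used to estimate the usual bout duration (UBD), i.e. the bout length at which the cumulated fraction of total time crosses 0.5. A standalone sketch of that fit on simulated data (the `summarised_bouts` columns below are a toy reconstruction, not the package's real objects):

```r
# Simulate bout durations (minutes) and the cumulated fraction of total
# time spent in bouts up to each duration, sorted by duration.
set.seed(1)
durations <- sort(rexp(200, rate = 0.2) + 1)
summarised_bouts <- data.frame(
  duration = durations,
  cum_frac_time = cumsum(durations / sum(durations))
)

# Same hyperbolic model as in the diff: cum_frac_time rises from 0 to 1
# with duration, and equals 0.5 exactly when duration == UBD.
model <- nls(
  cum_frac_time ~ duration^x / (duration^x + UBD^x),
  data = summarised_bouts,
  start = list(x = 2, UBD = median(summarised_bouts$duration))
)
coef(model)[["UBD"]]
```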
2 changes: 1 addition & 1 deletion R/compute_intensity_distri_metrics.R
@@ -154,7 +154,7 @@ df_mx <-
M60 = compute_mx(x = .data[[col_axis]], n = 60 * cor_factor),
M30 = compute_mx(x = .data[[col_axis]], n = 30 * cor_factor),
M15 = compute_mx(x = .data[[col_axis]], n = 15 * cor_factor),
M5 = compute_mx(x = .data[[col_axis]], n = 15 * cor_factor)
M5 = compute_mx(x = .data[[col_axis]], n = 5 * cor_factor)
)
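The fix matters because `M5` was previously computed with `n = 15 * cor_factor`, duplicating M15. `compute_mx()` is internal to the package; a hypothetical minimal version (the real implementation may differ) of an MX metric helper, where MX is conventionally the acceleration value above which the most active `n` epochs of the day are accumulated:

```r
# Hypothetical sketch of an MX metric helper (assumption, not the
# package's real compute_mx): the n-th highest epoch value, i.e. the
# intensity above which the most active n epochs were spent.
compute_mx <- function(x, n) {
  x <- sort(x[!is.na(x)], decreasing = TRUE)
  if (length(x) < n) return(NA_real_)
  x[n]
}

# With a 60-s reference epoch and 10-s data, cor_factor = 60 / 10 = 6,
# so M5 considers the 5 * 6 = 30 most active epochs of the day.
counts <- c(rep(500, 30), rep(100, 300))
compute_mx(x = counts, n = 30)  # -> 500
```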


2 changes: 1 addition & 1 deletion R/create_fig_mx_summary.R
@@ -126,7 +126,7 @@ p <-
axis.text.x = element_text(size = 15),
legend.text = element_text(size = 12),
legend.key.width = unit(1.5,"cm"),
legend.position = c(0.13, 0.07)
legend.position.inside = c(0.13, 0.07)
) +
guides(color = "none", fill = "none")

6 changes: 3 additions & 3 deletions R/create_fig_pal.R
@@ -60,7 +60,7 @@ if (language == "en") {
theme(axis.ticks = element_blank(),
axis.text.y = element_blank(),
axis.text.x = element_text(size = 13),
legend.position = c(0.5, 1.4),
legend.position.inside = c(0.5, 1.4),
legend.title = element_text(face = "bold" , size = 10),
legend.text = element_text(face = "bold", size = 17),
legend.background = element_rect(fill = "beige"),
@@ -105,7 +105,7 @@ if (language == "fr") {
theme(axis.ticks = element_blank(),
axis.text.y = element_blank(),
axis.text.x = element_text(size = 13),
legend.position = c(0.5, 1.4),
legend.position.inside = c(0.5, 1.4),
legend.title = element_text(face = "bold" , size = 10),
legend.text = element_text(face = "bold", size = 17),
legend.background = element_rect(fill = "beige"),
@@ -151,7 +151,7 @@ if (language == "de") {
theme(axis.ticks = element_blank(),
axis.text.y = element_blank(),
axis.text.x = element_text(size = 13),
legend.position = c(0.5, 1.4),
legend.position.inside = c(0.5, 1.4),
legend.title = element_text(face = "bold" , size = 10),
legend.text = element_text(face = "bold", size = 17),
legend.background = element_rect(fill = "beige"),
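These repeated edits track the ggplot2 3.5.0 change: passing numeric coordinates directly to `legend.position` was deprecated in favour of `legend.position.inside`. A minimal sketch of the new pattern (the plot itself is illustrative; note that `legend.position = "inside"` is typically also set so the numeric coordinates take effect):

```r
library(ggplot2)

# Since ggplot2 3.5.0, an in-panel legend is placed with
# legend.position = "inside" plus legend.position.inside = c(x, y),
# replacing the deprecated legend.position = c(x, y).
p <- ggplot(mtcars, aes(wt, mpg, color = factor(cyl))) +
  geom_point() +
  theme(
    legend.position = "inside",
    legend.position.inside = c(0.13, 0.07)
  )
```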
3 changes: 3 additions & 0 deletions R/do_all_analyses.R
@@ -5,6 +5,9 @@
#' the package. It is an internal function allowing the computation of the speed of the whole
#' analysis process, from the data importation to the final line of the results.
#'
#' @param to_epoch A numeric value to set the epoch required to collapse counts
#' in seconds.
#'
#' @return
#' A dataset (1 row) with all computed metrics.

2 changes: 1 addition & 1 deletion R/get_ig_results.R
@@ -54,7 +54,7 @@ get_ig_results <- function(
df_bins$bin_label <- paste0(round(df_bins$bin_start, 0),"-", round(df_bins$bin_end, 0))

# Correcting the value of the upper bound of the last bin (the value has been arbitrarily set so that it is very high)
df_bins[nrow(df_bins), "bin_end"] <- 50000
df_bins[nrow(df_bins), "bin_end"] <- 1000000

# Correcting the label of the last bin
df_bins[nrow(df_bins), "bin_label"] <- paste0(">", round(df_bins[nrow(df_bins), "bin_start"]-1, 0))
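Raising the last bin's upper bound from 50,000 to 1,000,000 effectively makes that bin open-ended, so very high count values can no longer fall outside it. A toy reconstruction of the bin bookkeeping this hunk touches (`df_bins` below is illustrative, not the package's real object):

```r
# Toy intensity bins (counts/min); the real bins are built elsewhere in
# get_ig_results().
df_bins <- data.frame(
  bin_start = c(0, 500, 1000),
  bin_end   = c(500, 1000, 1500)
)
df_bins$bin_label <- paste0(round(df_bins$bin_start, 0), "-", round(df_bins$bin_end, 0))

# Make the last bin open-ended: 1e6 sits safely above any plausible count
# value, unlike the previous 50000, and relabel it accordingly (e.g. ">999").
df_bins[nrow(df_bins), "bin_end"] <- 1000000
df_bins[nrow(df_bins), "bin_label"] <- paste0(">", round(df_bins[nrow(df_bins), "bin_start"] - 1, 0))
```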
2 changes: 2 additions & 0 deletions R/mark_wear_time.R
@@ -108,10 +108,12 @@ mark_wear_time <- function(
streamFrame = streamFrame
) %>%
dplyr::mutate(
wearing = dplyr::if_else(is.na(wearing), "nw", wearing),
non_wearing_count = dplyr::if_else(wearing == "nw", 1, 0),
wearing_count = dplyr::if_else(wearing == "w", 1, 0)
)


return(df2)

}
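The added line addresses the NAs reported in the 'wearing' column: epochs where the wear-time algorithm returns NA are relabelled as non-wear before the counting columns are derived. A toy illustration of the same pattern (the data frame below is invented for demonstration):

```r
library(dplyr)

# Epochs flagged "w" (wear), "nw" (non-wear), or NA by the algorithm.
df <- data.frame(wearing = c("w", NA, "nw", "w"))

df2 <- df %>%
  mutate(
    # Relabel NAs as non-wear first, so the counts below never see NA.
    wearing = if_else(is.na(wearing), "nw", wearing),
    non_wearing_count = if_else(wearing == "nw", 1, 0),
    wearing_count = if_else(wearing == "w", 1, 0)
  )

# df2$wearing is now "w", "nw", "nw", "w": no NAs propagate into the counts.
```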