Split IDs into groups to use for subsequent plotting

chunk

ids_per_plot(id, id_per_plot = 9)

chunk(.x, .nchunk = parallel::detectCores())

chunk_grp(.x, .nchunk = parallel::detectCores())

chunk_list(.x, .nchunk = parallel::detectCores())

chunk_grp_list(.x, .nchunk = parallel::detectCores())

Arguments

id

vector of IDs (e.g., an ID column)

id_per_plot

number of IDs per plot. Defaults to 9

.x

vector of values

.nchunk

number of chunks to split into. Defaults to parallel::detectCores()

Details

Works well with Hadley Wickham's purrr package: create a column to split on, then plot each piece separately. See vignette("Multiplot") for details.

chunk() and chunk_grp() return a vector of chunk indices; chunk_list() and chunk_grp_list() split the data and return it as a list. The _grp variants chunk by unique values (groups) rather than by individual elements.
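The group-wise chunking and the purrr workflow described above can be sketched in base R. chunk_grp_sketch() below is a hypothetical stand-in for chunk_grp() (so the snippet is self-contained, not the package's actual implementation), and dat and the plotting call are illustrative assumptions:

```r
# hypothetical stand-in for chunk_grp(): chunk unique values as evenly
# as possible, then map the group index back onto the original vector
chunk_grp_sketch <- function(.x, .nchunk) {
  u <- unique(.x)
  grp <- sort(rep_len(seq_len(.nchunk), length(u)))
  grp[match(.x, u)]
}

chunk_grp_sketch(c(1, 1, 1:7), 3)
#> [1] 1 1 1 1 1 2 2 3 3

# hypothetical workflow: add a column to split on, then plot each piece
dat <- data.frame(id = rep(101:106, each = 3), y = rnorm(18))
dat$panel <- chunk_grp_sketch(dat$id, 2)
pieces <- split(dat, dat$panel)
# e.g. purrr::map(pieces, function(d) plot(d$id, d$y))  # one plot per chunk
```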

Examples

# chunking will provide the chunk index by splitting the data as evenly
# as possible into the number of chunks specified
letters[1:9]
#> [1] "a" "b" "c" "d" "e" "f" "g" "h" "i"
chunk(letters[1:9], 3)
#> [1] 1 1 1 2 2 2 3 3 3
letters[c(1, 1, 1:7)]
#> [1] "a" "a" "a" "b" "c" "d" "e" "f" "g"
chunk(letters[c(1, 1, 1:7)], 3)
#> [1] 1 1 1 2 2 2 3 3 3
# sometimes you want to evenly chunk by unique values rather than purely balancing
chunk_grp(c(1, 1, 1:7), 3)
#> [1] 1 1 1 1 1 2 2 3 3
# a next step after chunking is splitting into a list, and chunk_list()
# does this for you: it both splits the data and keeps the original values
chunk_list(letters[1:9], 3)
#> [[1]]
#> [1] "a" "b" "c"
#>
#> [[2]]
#> [1] "d" "e" "f"
#>
#> [[3]]
#> [1] "g" "h" "i"
#>
chunk_list(c(letters[1], letters[1], letters[1:7]), 3)
#> [[1]]
#> [1] "a" "a" "a"
#>
#> [[2]]
#> [1] "b" "c" "d"
#>
#> [[3]]
#> [1] "e" "f" "g"
#>
# in this case ragged arrays will be created to keep the number of
# unique elements as consistent as possible between chunks
chunk_grp_list(c(letters[1], letters[1], letters[1:7]), 3)
#> [[1]]
#> [1] "a" "a" "a" "b" "c"
#>
#> [[2]]
#> [1] "d" "e"
#>
#> [[3]]
#> [1] "f" "g"
#>
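ids_per_plot() is not demonstrated above. Assuming it assigns each unique ID to a plot index so that no plot receives more than id_per_plot unique IDs (an assumption based on the argument descriptions, not the package source), a minimal base-R sketch of that behaviour:

```r
# hypothetical reimplementation of ids_per_plot(): each unique ID is assigned
# a plot index, with at most id_per_plot unique IDs per plot (assumed semantics)
ids_per_plot_sketch <- function(id, id_per_plot = 9) {
  u <- unique(id)
  plt <- ceiling(seq_along(u) / id_per_plot)
  plt[match(id, u)]
}

# 20 unique IDs with the default of 9 per plot -> plot indices 1, 2, 3
ids_per_plot_sketch(1:20)
#> [1] 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 3 3
```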