Category:Use-Cases-and-Stories

From Epos WiKi
This page introduces the use cases and user stories collected for EPOS.
 
  
== '''COMPLEX USER STORY FOR GUI''' ==
 
  
By a complex user story we mean a Research Scientist (RS) who searches for multiple types of data and wants to visualize them together, compare similar data types, carry out his/her own analysis (including programming), and save the results for later use, downloading either whole datasets or just parts of them. Presentation of results/conclusions should also be part of the system.
The two use cases below describe typical workflows that EPOS users might follow. They were used to help design the [http://nodedev.bgs.ac.uk/epos/epos-gui/master/search GUI] of the EPOS web portal and its functionality, and were crucial in helping the developers design a practical and useful environment. During the Requirements and Use Cases collection (January - March 2016), typical user stories were gathered by the EPOS communications and development teams (WP6&7) from all the scientific communities (TCS WP8-WP17) involved in the EPOS project. Each community provided several Use Cases and described their User Stories; two User Stories derived from those collected are presented here.

The complex use case is also intended to exercise all the architecture elements at the same time.
 
  
Further examples are provided on the [https://intranet.epos-ip.org/ EPOS intranet pages] and on the EPOS [https://github.com/epos-eu/documentation/ GitHub Technical Documentation pages].

'''Example'''

Event: A strong earthquake hits southern Italy near the Vesuvius volcano. As a scientist I want to retrieve different datasets, display them and compare them. I want to select a subset of the data that shows a specific trend and perform analysis on that subset. Afterwards I want to use the results in another context and prepare figures for publication or web presentation.
 
Initial hypothesis: There may be a relation between large EQs affecting the local stress conditions and the magma chamber underneath the volcano: changes in stress can trigger volcanic activity.
 
Aims: Investigate possible relations between different data types, analyse such relations (statistical significance), and verify or reject the initial hypothesis, possibly arriving at new suggestions/conclusions.
 
  
The Simple User Story assumes that the user is searching for specific data related to his/her research area. Once a result is found, it can be inspected (checking details about its origin, etc.), visualized and then downloaded.

'''A. Discovery / search for relevant trends / correlations'''

1. Get an overview - show me (visual / spatio-temporal relations in maps):
 
1.1 Historical seismicity in that area (map within a bounding box) - earthquake catalogue (interactive map; filtering of events)
 
EXAMPLES: http://www.emsc-csem.org/Earthquake/?filter=yes
 
https://earthquake.usgs.gov/earthquakes/search/
 
WEBSERVICE: http://www.seismicportal.eu/fdsn-wsevent.html
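The web service linked above follows the FDSN standard event query interface, which accepts bounding-box, time-window and magnitude parameters. A minimal Python sketch that builds such a query for the historical-seismicity search in 1.1 (the endpoint path `fdsnws/event/1/query` and all parameter values are assumptions based on the FDSN web-service specification, not taken from this page):

```python
from urllib.parse import urlencode

# Assumed FDSN-style event endpoint behind the seismicportal.eu link above.
FDSN_EVENT_URL = "http://www.seismicportal.eu/fdsnws/event/1/query"

def build_event_query(min_lat, max_lat, min_lon, max_lon,
                      start, end, min_mag=None, fmt="text"):
    """Build an FDSN-standard event query URL for a bounding box."""
    params = {
        "minlatitude": min_lat,
        "maxlatitude": max_lat,
        "minlongitude": min_lon,
        "maxlongitude": max_lon,
        "starttime": start,
        "endtime": end,
        "format": fmt,
    }
    if min_mag is not None:
        params["minmagnitude"] = min_mag
    return FDSN_EVENT_URL + "?" + urlencode(params)

# Illustrative bounding box around the Vesuvius area.
url = build_event_query(40.0, 41.5, 13.5, 15.5,
                        "1900-01-01", "2018-01-01", min_mag=4.0)
```

The same parameter names work against any FDSN-compliant event service, which is why the portal can treat different TCS catalogues uniformly.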
 
1.2 Mapped faults + geology (map, WMS)
 
EXAMPLE: http://geo.ngu.no/kart/berggrunn/ (exists as WMS as well)
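Because the geology layers exist as WMS, they can be fetched with a standard OGC WMS 1.3.0 GetMap request. A sketch of building such a request (the base URL and layer name below are hypothetical placeholders; real values must be read from the service's GetCapabilities response):

```python
from urllib.parse import urlencode

def build_getmap_url(base_url, layers, bbox, width=800, height=600,
                     crs="EPSG:4326", fmt="image/png"):
    """Build a WMS 1.3.0 GetMap request for one or more layers.

    bbox is (min_lat, min_lon, max_lat, max_lon) -- WMS 1.3.0 with
    EPSG:4326 orders axes latitude-first.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": ",".join(layers),
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and layer id, for illustration only.
wms_url = build_getmap_url("https://example.org/geology/wms",
                           ["bedrock"], (40.0, 13.5, 41.5, 15.5))
```

Overlaying several such layers in the GUI is then just a matter of requesting the same BBOX from each service.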
 
1.3 Previous volcanic activity (interactive map)
 
EXAMPLE: http://icelandicvolcanoes.is/
 
1.4 GNSS velocity field (map)
 
1.5 Plot those datasets (maps) together or in different pairs
 
 
o seismicity and distribution of lava flows
 
o seismicity and faults
 
o GNSS velocity map and faults
 
including various subsets of data (e.g. specific volcanic eruption in a given time and space and the associated seismicity in the same space and time window).
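The subset selection described in 1.5 reduces, at its core, to a spatio-temporal filter over catalogue records. A minimal sketch (the field names and sample records are illustrative, not a real EPOS data model):

```python
from datetime import datetime

def subset(records, bbox, start, end):
    """Select records inside a bounding box and time window.

    bbox = (min_lat, min_lon, max_lat, max_lon); each record is a dict
    with 'lat', 'lon' and 'time' keys (illustrative minimal schema).
    """
    min_lat, min_lon, max_lat, max_lon = bbox
    return [r for r in records
            if min_lat <= r["lat"] <= max_lat
            and min_lon <= r["lon"] <= max_lon
            and start <= r["time"] <= end]

events = [
    {"lat": 40.8, "lon": 14.4, "time": datetime(1944, 3, 18)},   # near Vesuvius
    {"lat": 63.6, "lon": -19.6, "time": datetime(2010, 4, 14)},  # Iceland
]
near_vesuvius = subset(events, (40.0, 13.5, 41.5, 15.5),
                       datetime(1900, 1, 1), datetime(2018, 1, 1))
```

The same filter applied to an eruption catalogue and an earthquake catalogue with identical bbox and time window yields the paired subsets mentioned above.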
 
1.6 Save selected search results into my workspace
 
  
The Complex User Story is the counterpart of the Simple User Story: it introduces a complex scenario with data from many (possibly all) TCSs, and expects that the user will do all the analysis of all datasets within the web portal environment, including use of HPC or HTC (via ICS-D).

2. Investigate possible indicators of geodynamic activity (map and graphic visualisation of parametric data)
2.1 Show the positions of all measuring stations on a map (interactive map) - these can be GPS/GNSS stations, seismic stations, monitored boreholes, dilatometers in the field, water level gauges, etc.
 
2.2 Allow filtering for specific data types as mentioned above (using faceted search) (add selected stations to a basket/workspace for later processing)
 
    2.2.1 In-situ stress measurements (time series)
 
    2.2.2 Water level in surrounding boreholes (time series)
 
    2.2.3 Real time GPS/GNSS (time series)
 
    2.2.4 Amount of CO2 production in boreholes near volcano (time series)
 
    2.2.5 Compare all time series in time-aligned plots and save figure.
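Comparing time series recorded at different sampling instants (2.2.5) first requires putting them on a common time axis. A minimal linear-interpolation sketch, assuming numeric time stamps (real portal code would use a numerical library for this):

```python
def resample(times, values, new_times):
    """Linearly interpolate a time series onto a new time axis.

    times must be sorted; new_times outside the original range are
    clamped to the edge values (a simple choice for visual comparison).
    """
    out = []
    for t in new_times:
        if t <= times[0]:
            out.append(values[0])
        elif t >= times[-1]:
            out.append(values[-1])
        else:
            i = 1
            while times[i] < t:       # find the bracketing samples
                i += 1
            t0, t1 = times[i - 1], times[i]
            v0, v1 = values[i - 1], values[i]
            out.append(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
    return out

# Two series sampled at different instants, aligned to a common axis.
gps = resample([0, 10, 20], [0.0, 1.0, 2.0], [0, 5, 10, 15, 20])
```

Once every series is resampled onto the shared axis, time-aligned subplots can be drawn directly.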
 
2.3 Plot various combinations of subsets of the data and perform agnostic data discovery, e.g. "Does CO2 production correlate with seismicity?" (this analysis can involve ICS-D for visualization and trend analysis)
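The correlation question in 2.3 can be answered to first order with a Pearson correlation coefficient. A self-contained sketch with purely illustrative numbers (not real observations; a real analysis would also test statistical significance):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# e.g. weekly CO2 flux vs. weekly earthquake counts (made-up values)
co2 = [1.0, 1.2, 1.1, 1.6, 2.0, 2.1]
quakes = [3, 4, 4, 7, 9, 10]
r = pearson(co2, quakes)   # close to 1 for these strongly co-varying series
```

Values of r near +1 or -1 would motivate the follow-up significance analysis; values near 0 would argue against a relation.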
 
2.4 Add selected datasets to my workspace
 
 
 
3. Download data or their subsets
 
3.1 Make a request (e.g. specify time window for waveform extraction)
 
3.2 The user gets a response from the system with the execution time needed to prepare the data

3.3 Confirm and download
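The request / estimate / confirm flow above could be modelled as a two-phase interaction. The sketch below mocks it locally; every class and method name is hypothetical (the real EPOS API may expose this differently), and the throughput used for the estimate is an assumed constant:

```python
class DownloadRequest:
    """Mock of the server-side preparation of a requested data subset."""

    BYTES_PER_SECOND = 50_000_000   # assumed throughput, for the fake estimate

    def __init__(self, dataset, start, end, size_bytes):
        self.dataset = dataset
        self.window = (start, end)
        self.size_bytes = size_bytes
        self.confirmed = False

    def estimated_seconds(self):
        """The system reports an execution-time estimate before download."""
        return self.size_bytes / self.BYTES_PER_SECOND

    def confirm(self):
        """The user confirms; the actual download would start here."""
        self.confirmed = True
        return f"downloading {self.dataset} for window {self.window}"

req = DownloadRequest("waveforms", "2016-08-24T00:00:00",
                      "2016-08-25T00:00:00", size_bytes=500_000_000)
eta = req.estimated_seconds()   # 10.0 s with the assumed throughput
msg = req.confirm()
```

The point of the two phases is that the user sees the cost of the request before committing to it, which matters for large waveform extractions.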
 
 
'''B. Analysis'''
 
Using the selected subsets of data from various resources
 
4. Analyse the earthquake
 
4.1 Plot waveforms and check automatic phase onsets (process online; data download, catalogue record download)
 
EXAMPLE: https://quakelink.gempa.de/gaps/
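Automatic phase onsets such as those checked in 4.1 are commonly produced by an STA/LTA trigger. A pure-Python sketch on a synthetic trace (production systems use optimised library implementations of this classic algorithm):

```python
def sta_lta(signal, nsta, nlta):
    """Short-term / long-term average ratio of the squared amplitude,
    the classic trigger used for automatic phase-onset detection."""
    energy = [s * s for s in signal]
    ratios = [0.0] * len(signal)
    for i in range(nlta, len(signal)):
        sta = sum(energy[i - nsta:i]) / nsta   # short-term average
        lta = sum(energy[i - nlta:i]) / nlta   # long-term average
        if lta > 0:
            ratios[i] = sta / lta
    return ratios

# Quiet noise followed by a sudden arrival: the ratio peaks near the onset.
trace = [0.01] * 100 + [1.0] * 20
ratios = sta_lta(trace, nsta=5, nlta=50)
onset = max(range(len(ratios)), key=ratios.__getitem__)
```

In practice the picked onsets are exactly what the analyst then corrects manually in step 4.2.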
 
4.2 Do corrections of phase onsets (plot waveforms)
 
4.3 Relocate the earthquake (using different velocity models - 1D, 3D), estimate the magnitude

4.4 MT inversion

4.4.1 Compare the MT solution with historical MTs of EQs in that area

4.5 Do processing in any software (domestic or external) - ICS-D (HPC)

4.6 Analyse static stress transfer -> see if the additional stress in the magma chamber is significant (?)
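Static stress transfer analyses typically evaluate the Coulomb failure stress change, ΔCFS = Δτ + μ′·Δσn, on a receiver fault or magma chamber wall. A minimal sketch (the sign convention, units and effective friction value below are common choices in the literature, not specified on this page):

```python
def coulomb_stress_change(delta_shear, delta_normal, mu_eff=0.4):
    """Static Coulomb failure stress change (MPa).

    delta_shear  : shear stress change in the slip direction (MPa)
    delta_normal : normal stress change, positive = unclamping (MPa)
    mu_eff       : effective friction coefficient (0.4 is a common choice)
    """
    return delta_shear + mu_eff * delta_normal

# A receiver with +0.1 MPa shear loading and 0.05 MPa unclamping is
# brought closer to failure.
dcfs = coulomb_stress_change(0.1, 0.05)
```

A positive ΔCFS on the structure of interest is the usual first-order argument that the earthquake promoted subsequent activity there.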
 
5. Analyse co-seismic processes
 
5.1 Show InSAR images (map)
 
5.2 Show static displacement from GNSS after the earthquake (map)
 
5.3 Slip inversion - ICS-D (CES, modelling)
 
Compare, save figure.
 
 
 
'''C. Results and presentation of output from analysis for decisions'''
 
6. Interactive check points for validation of the hypothesis (i.e. summarise results from points 2, 4 and 5)
 
6.1 From point 2: "Is there any statistically significant correlation between any observations?"
 
6.2 From point 4:
 
6.2.1 "Is the volumetric (or non-DC) part of the moment tensor significant? Can it be related to magma intrusion?"
 
6.2.2 "Could the additional stress caused by the EQ in the magma chamber be significant?"
 
6.3 From point 5:
 
6.3.1 "Does the InSAR data show any movement (inflation/subsidence)?"
 
6.3.2 "Does the static displacement from GNSS data show any movement (inflation/subsidence)?"
 
6.3.3 "What is the slip distribution along the fault plane?"
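The significance of the volumetric part of the moment tensor (question 6.2.1) is usually assessed from a decomposition of its eigenvalues. A deliberately simplified sketch of such a measure (this ratio is an illustrative simplification; real studies use full ISO/DC/CLVD decompositions):

```python
def isotropic_share(eigs):
    """Isotropic share of a moment tensor given its three eigenvalues.

    m_iso = tr(M)/3; share = |m_iso| / (|m_iso| + max|m_i - m_iso|).
    Returns 1.0 for a pure explosion and 0.0 for a pure double couple.
    (A simplified measure chosen for clarity, not a standard definition.)
    """
    m_iso = sum(eigs) / 3.0
    dev_max = max(abs(e - m_iso) for e in eigs)
    denom = abs(m_iso) + dev_max
    return abs(m_iso) / denom if denom else 0.0

explosion = isotropic_share([1.0, 1.0, 1.0])       # pure volumetric source
double_couple = isotropic_share([1.0, 0.0, -1.0])  # pure shear faulting
```

A noticeable isotropic share would support the magma-intrusion interpretation asked about in 6.2.1, whereas a value near zero points to ordinary shear faulting.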
 
 
 
'''D. Display and download results / conclusions / interpretations'''
 
7. Presenting the results for various end users (using specific web templates)
 
• for my own research publication
 
• for another research group
 
• for external use by different stakeholders (e.g. public / governmental / emergency services / industry)
 
 
'''Some implications for GUI'''
 
1. Individual DDSS elements should have specific dedicated tools which allow their discovery / exploration (e.g. earthquake catalogue - map, filters allowing sorting/selecting by various catalogue parameters, 2D plots for statistical evaluation - various standardised tools)
 
o Ask WP IT Contacts (or DDSS IT Contacts) for standard tools commonly used within TCS associated to individual DDSS elements
 
2. How to combine similar types of DDSS elements and visualize them together (map overlay, aligned 2D plots, aligned time series)? What tools do we have (table below)?
 
3. To plot earthquake locations on a map directly in discovery mode would require storing EQ catalogue values at ICS level, to avoid delays caused by requesting data via API from TCS level. How and where should the data be stored?
 
4. How will the workspace/basket be organized? Suggestions? Examples on the web?
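One possible shape for the workspace/basket is an in-memory structure holding references to selected results, grouped by data type so that later processing steps can retrieve them. A sketch (all names and identifiers are illustrative; a real implementation would persist per-user state in a database):

```python
class Workspace:
    """Minimal per-user basket: stores references to selected search
    results, grouped by data type, for later processing."""

    def __init__(self):
        self._items = {}   # data_type -> list of item references

    def add(self, data_type, ref):
        """Add one selected result reference under a data type."""
        self._items.setdefault(data_type, []).append(ref)

    def items(self, data_type=None):
        """All references, or only those of one data type."""
        if data_type is None:
            return [r for refs in self._items.values() for r in refs]
        return list(self._items.get(data_type, []))

ws = Workspace()
ws.add("catalogue", "emsc:event/20160824_0000006")   # illustrative ids
ws.add("waveforms", "fdsn:IV.STN..HHZ/2016-08-24")
```

Grouping by data type directly supports the faceted filtering described in 2.2 and the "add to basket for later processing" step.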
 
5. To set up a workflow for specific processing will require programming skills and tools that allow it. What tools do we know?

o The Jupyter tool, which in connection with Enlighten (developed by CMR, Norway) can provide intuitive data exploration

o (add more)
 

Latest revision as of 18:17, 19 July 2018