
Thursday, January 05, 2017

Creating AWS Machine Learning Models from ABAP

Hi guys,

Following up on my previous article "Using AWS Machine Learning from ABAP to predict runtimes", I have now been able to extend the ABAP based API to create models from ABAP internal tables (which are like collections of records, for the non-ABAPers ;-).

This basically enables ABAP developers to utilize Machine Learning full cycle without ever having to leave their home turf or worry about the specifics of the AWS Machine Learning implementation.

My use case is still the same: predicting runtimes of the SNP System Scan based on well-known parameters such as database vendor (e.g. Oracle, MaxDB), database size, SNP System Scan version and others. But since my first model was not quite meeting my expectations, I wanted to be able to play around easily, adding and removing attributes from the model in a nice ABAP-centric workflow. That is probably also the most effective way for other ABAP developers to utilize Machine Learning. So let's take a look at the basic structure of the example program:

REPORT /snp/aws01_ml_create_model.

START-OF-SELECTION.
  PERFORM main.

FORM main.
*"--- DATA DEFINITION -------------------------------------------------
  DATA: lr_scan_data TYPE REF TO data.
  DATA: lr_prepared_data TYPE REF TO data.
  DATA: lr_ml TYPE REF TO /snp/aws00_cl_ml.
  DATA: lv_model_id TYPE string.
  DATA: lr_ex TYPE REF TO cx_root.
  DATA: lv_msg TYPE string.

  FIELD-SYMBOLS: <lt_data> TYPE table.

*"--- PROCESSING LOGIC ------------------------------------------------
  TRY.
      "fetch the data into an internal table
      PERFORM get_system_scan_data CHANGING lr_scan_data.
      ASSIGN lr_scan_data->* TO <lt_data>.

      "prepare data (e.g. convert, select features)
      PERFORM prepare_data USING <lt_data> CHANGING lr_prepared_data.
      ASSIGN lr_prepared_data->* TO <lt_data>.

      "create a model
      CREATE OBJECT lr_ml.
      PERFORM create_model USING lr_ml <lt_data> CHANGING lv_model_id.

      "check if...
      IF lr_ml->is_ready( lv_model_id ) = abap_true.

        "...creation was successful
        lv_msg = /snp/cn00_cl_string_utils=>text( iv_text = 'Model &1 is ready' iv_1 = lv_model_id ).
        MESSAGE lv_msg TYPE 'S'.

      ELSEIF lr_ml->is_failed( lv_model_id ) = abap_true.

        "...creation failed
        lv_msg = /snp/cn00_cl_string_utils=>text( iv_text = 'Model &1 has failed' iv_1 = lv_model_id ).
        MESSAGE lv_msg TYPE 'S' DISPLAY LIKE 'E'.

      ENDIF.

    CATCH cx_root INTO lr_ex.

      "output errors
      lv_msg = lr_ex->get_text( ).
      PERFORM display_lines USING lv_msg.

  ENDTRY.

ENDFORM.

And now let's break it down into its individual parts:

Fetch Data into an Internal Table

In my particular case I was fetching the data via a REST service from the SNP Data Cockpit instance I use to keep statistics on all executed SNP System Scans. However, you can fetch the data that serves as your model's data source in any way you like; most probably you will use plain OpenSQL SELECTs. The resulting data looks somewhat like this:
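In case you go the OpenSQL route, here is a minimal sketch of what the GET_SYSTEM_SCAN_DATA form could look like. The table name ZSCAN_STATISTICS is purely illustrative (in my setup the data actually comes from the REST call mentioned above), the point is only how the result is handed back as a generic data reference:

FORM get_system_scan_data CHANGING rr_data TYPE REF TO data.
*"--- DATA DEFINITION -------------------------------------------------
  "ZSCAN_STATISTICS is a made-up table name standing in for your data source
  DATA: lt_scans TYPE TABLE OF zscan_statistics.

  FIELD-SYMBOLS: <lt_data> TYPE table.

*"--- PROCESSING LOGIC ------------------------------------------------
  "a plain OpenSQL SELECT works just as well as a remote service
  SELECT * FROM zscan_statistics INTO TABLE lt_scans.

  "hand the result back as a generic data reference
  CREATE DATA rr_data LIKE lt_scans.
  ASSIGN rr_data->* TO <lt_data>.
  <lt_data> = lt_scans.

ENDFORM.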

Prepare Data

This is the raw data, and it's not perfect! In its current shape the data quality is not quite good enough. According to this article there are some improvements I need to make in order to raise its quality:
  • Normalizing values (e.g. lower casing, mapping or clustering values). E.g.
    • Combining the database vendor and the major version of the database, because those two values only make sense in combination, not individually
    • Clustering the database size into 1.5 TB chunks, as these values can be guessed more easily when executing predictions
    • Clustering the runtime into exponentially increasing categories (although this may also hurt accuracy...)
  • Filling up empty values with reasonable defaults. E.g.
    • treating all unknown SAP client types as test clients
  • Making values and field names more human readable. This is not necessary for the machine learning algorithms, but it makes manual result interpretation easier
  • Removing fields that do not make good features, like
    • IDs
    • fields that cannot be provided for later predictions, because their values cannot be determined easily or intuitively
  • Removing records that still do not have good data quality. E.g. missing values in
    • database vendors
    • SAP system types
    • customer industry
  • Removing records that are not representative. E.g.
    • scans with exceptionally short runtimes, probably due to intentionally limiting the scope
    • small database sizes that probably belong to non-productive systems
FORM prepare_data USING it_data TYPE table CHANGING rr_data TYPE REF TO data.
*"--- DATA DEFINITION -------------------------------------------------
  DATA: lr_q TYPE REF TO /snp/cn01_cl_itab_query.

*"--- PROCESSING LOGIC ------------------------------------------------
  CREATE OBJECT lr_q.

  "selecting the fields that make good features
  lr_q->select( iv_field = 'COMP_VERSION'      iv_alias = 'SAP_SYSTEM_TYPE' ).
  lr_q->select( iv_field = 'DATABASE'          iv_uses_fields = 'NAME,VERSION' iv_cb_program = sy-repid iv_cb_form = 'ON_VIRTUAL_FIELD' ).
  lr_q->select( iv_field = 'DATABASE_SIZE'     iv_uses_fields = 'DB_USED' iv_cb_program = sy-repid iv_cb_form = 'ON_VIRTUAL_FIELD' ).
  lr_q->select( iv_field = 'OS'                iv_alias = 'OPERATING_SYSTEM' ).
  lr_q->select( iv_field = 'SAP_CLIENT_TYPE'   iv_uses_fields = 'CCCATEGORY' iv_cb_program = sy-repid iv_cb_form = 'ON_VIRTUAL_FIELD' ).
  lr_q->select( iv_field = 'COMPANY_INDUSTRY1' iv_alias = 'INDUSTRY' ).
  lr_q->select( iv_field = 'IS_UNICODE'        iv_cb_program = sy-repid iv_cb_form = 'ON_VIRTUAL_FIELD' ).
  lr_q->select( iv_field = 'SCAN_VERSION' ).
  lr_q->select( iv_field = 'RUNTIME'           iv_uses_fields = 'RUNTIME_HOURS' iv_cb_program = sy-repid iv_cb_form = 'ON_VIRTUAL_FIELD' ).

  "perform the query on the defined internal table
  lr_q->from( it_data ).

  "filter records that are not good for results
  lr_q->filter( iv_field = 'DATABASE'         iv_filter = '-' ). "no empty values in the database
  lr_q->filter( iv_field = 'SAP_SYSTEM_TYPE'  iv_filter = '-' ). "no empty values in the SAP System Type
  lr_q->filter( iv_field = 'INDUSTRY'         iv_filter = '-' ). "no empty values in the Industry
  lr_q->filter( iv_field = 'RUNTIME_MINUTES'  iv_filter = '>=10' ). "minimum of 10 minutes runtime
  lr_q->filter( iv_field = 'DATABASE_GB_SIZE' iv_filter = '>=50' ). "minimum of 50 GB database size

  "sort by runtime
  lr_q->sort( 'RUNTIME_MINUTES' ).

  "execute the query
  rr_data = lr_q->run( ).

ENDFORM.

Basically the magic is done by the /SNP/CN01_CL_ITAB_QUERY class, which is part of the SNP Transformation Backbone framework. It enables SQL-like query capabilities on ABAP internal tables, including the transformation of field values, which is done via callback mechanisms.


FORM on_virtual_field USING iv_field is_record TYPE any CHANGING cv_value TYPE any.

  "...

  CASE iv_field.
    WHEN 'DATABASE'.

      "combine database name and major version into one value
      mac_get_field 'NAME' lv_database.
      mac_get_field 'VERSION' lv_database_version.
      SPLIT lv_database_version AT '.' INTO lv_database_version lv_tmp.
      CONCATENATE lv_database lv_database_version INTO cv_value SEPARATED BY space.

    WHEN 'DATABASE_SIZE'.

      "categorize the database size into 1.5 TB chunks (e.g. "up to 4.5 TB")
      mac_get_field 'DB_USED' cv_value.
      lv_p = ( floor( cv_value / 1500 ) + 1 ) * '1.5'. "simple rounding to full 1.5 TB chunks
      cv_value = /snp/cn00_cl_string_utils=>text( iv_text = 'up to &1 TB' iv_1 = lv_p ).
      TRANSLATE cv_value USING ',.'. "translate commas to dots so the CSV does not get confused

    WHEN 'SAP_CLIENT_TYPE'.

      "fill up the client category type with a default value
      mac_get_field 'CCCATEGORY' cv_value.
      IF cv_value IS INITIAL.
        cv_value = 'T'. "default to (T)est SAP client
      ENDIF.

    WHEN 'IS_UNICODE'.

      "convert the unicode flag into more human readable values
      IF cv_value = abap_true.
        cv_value = 'unicode'.
      ELSE.
        cv_value = 'non-unicode'.
      ENDIF.

    WHEN 'RUNTIME'.

      "categorize the runtime into human readable chunks
      mac_get_field 'RUNTIME_HOURS' lv_int.
      IF lv_int <= 1.
        cv_value = 'up to 1 hour'.
      ELSEIF lv_int <= 2.
        cv_value = 'up to 2 hours'.
      ELSEIF lv_int <= 3.
        cv_value = 'up to 3 hours'.
      ELSEIF lv_int <= 4.
        cv_value = 'up to 4 hours'.
      ELSEIF lv_int <= 5.
        cv_value = 'up to 5 hours'.
      ELSEIF lv_int <= 6.
        cv_value = 'up to 6 hours'.
      ELSEIF lv_int <= 12.
        cv_value = 'up to 12 hours'.
      ELSEIF lv_int <= 24.
        cv_value = 'up to 1 day'.
      ELSEIF lv_int <= 48.
        cv_value = 'up to 2 days'.
      ELSEIF lv_int <= 72.
        cv_value = 'up to 3 days'.
      ELSE.
        cv_value = 'more than 3 days'.
      ENDIF.

  ENDCASE.

ENDFORM.

After running all those preparations, the data is transformed into a record set that looks like this:


Create a Model

Ok, preparing data for a model is something the developer has to do for each individual problem he wants to solve. But I guess this is done best in a well-known environment; after all, that is the whole purpose of the ABAP API. Now we get to the part that's easy, because creating the model from the internal table we have prepared so far is fully automated. As a developer you are completely relieved of the following tasks:

  • Converting the internal table into CSV
  • Uploading it into an AWS S3 bucket and assigning the correct privileges, so it can be used for machine learning
  • Creating a data source based on the just uploaded AWS S3 object and providing the input schema (e.g. which fields are categorical, which ones are numeric etc.), as this information can be derived automatically from DDIC information
  • Creating a model from the data source
  • Training the model
  • Creating a URL endpoint so the model can be used for predictions, as seen in the previous article.
That's quite a lot of stuff that you do not need to do anymore. Doing all of this is just one API call away:

FORM create_model USING ir_aws_machine_learning TYPE REF TO /snp/aws00_cl_ml
                        it_table TYPE table
               CHANGING rv_model_id.

  rv_model_id = ir_aws_machine_learning->create_model(

    "...by creating a CSV file from an internal table
    "  and uploading it to AWS S3, so it can be used
    "  as a machine learning data source
    it_table = it_table

    "...by defining the target field that is to be predicted
    iv_target_field = 'RUNTIME'

    "...(optional) by defining a title
    iv_title = 'Model for SNP System Scan Runtimes'

    "...(optional) to create an endpoint, so the model
    "  can be used for predictions. This defaults to
    "  true, but you may want to switch it off

    " IV_CREATE_ENDPOINT = ABAP_FALSE

    "...(optional) by defining fields that should be
    "  treated as text rather than as a category.
    "  By default all character based fields are treated
    "  as categorical fields

    " IV_TEXT_FIELDS = 'COMMA,SEPARATED,LIST,OF,FIELDNAMES'

    "...(optional) by defining fields that should be
    "  treated as numerical fields rather than categorical
    "  fields. By default the type will be derived from the
    "  underlying data type, but for convenience reasons
    "  you may want to use this instead of creating and
    "  filling a completely new structure

    " IV_NUMERIC_FIELDS = 'COMMA,SEPARATED,LIST,OF,FIELDNAMES'

    "...(optional) by defining if you want to create the model
    "  synchronously or asynchronously. By default the
    "  datasource, model, evaluation and endpoint are created
    "  synchronously so that after returning from the method call
    "  you can immediately start with predictions.

    " IV_WAIT = ABAP_TRUE by default
    " IV_SHOW_PROGRESS = ABAP_TRUE by default
    " IV_REFRESH_RATE_IN_SECS = 5 seconds by default

  ).

ENDFORM.

As you can see, most of it is optional. Sane defaults are provided that assume the data upload and the creation of the datasource, model, training and endpoint all happen synchronously, so you can perform predictions directly afterwards. Creating all of this asynchronously is also possible, in case you do not need to perform predictions right away. After all, the whole process takes 10 to 15 minutes, which is why showing progress becomes important, especially since you do not want to run into timeout situations when doing this in online mode with a GUI connected.
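For illustration, a minimal sketch of the asynchronous flavour could look like the form below. It uses the same API as above, just with IV_WAIT switched off; in practice the polling with IS_READY / IS_FAILED would of course live in a separate job or report rather than right below the call:

FORM create_model_async USING ir_ml TYPE REF TO /snp/aws00_cl_ml
                              it_table TYPE table
                     CHANGING rv_model_id TYPE string.
*"--- PROCESSING LOGIC ------------------------------------------------
  "kick off CSV upload, datasource, model and training without blocking the GUI
  rv_model_id = ir_ml->create_model(
    it_table        = it_table
    iv_target_field = 'RUNTIME'
    iv_wait         = abap_false ).

  "...later, e.g. from a scheduled batch job, poll for the outcome
  IF ir_ml->is_ready( rv_model_id ) = abap_true.
    MESSAGE 'Model is ready for predictions' TYPE 'S'.
  ELSEIF ir_ml->is_failed( rv_model_id ) = abap_true.
    MESSAGE 'Model creation failed' TYPE 'S' DISPLAY LIKE 'E'.
  ENDIF.

ENDFORM.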

The Result

After all is done, you can perform predictions. Right, let's hop over into the AWS Machine Learning console and see the results:

A CSV file was created in an AWS S3 bucket...


...then a datasource, ML model and an evaluation for training the model were created (also an endpoint, but the screenshot does not show it) ...


...and finally we can inspect the model performance.

Conclusion

This is a big step towards making Machine Learning available to many without the explicit need to cope with vendor-specific aspects. Understanding the principles of machine learning remains a requirement, though, especially regarding which problems you can apply it to and what good data quality means for good predictions.

Sunday, January 01, 2017

Using AWS Machine Learning from ABAP to predict runtimes

Happy new year everybody!

Today I tried out Amazon's Machine Learning capabilities. After running through the basic AWS Machine Learning tutorial and getting to know how the guys at AWS deal with the subject, I got quite excited.



Everything sounds quite easy:

  1. Prepare example data in a single CSV file with good and distinct features for test and training purposes
  2. Create a data source from that CSV file, which basically means verifying that the column types were detected correctly and specifying a result column. 
  3. Create a Machine Learning model from the data source, running an evaluation on it
  4. Create an Endpoint, so your model becomes consumable via a URL based service

My example use case was to predict the runtime of one of our analysis tools - SNP System Scan - given some system parameters. In general any software will probably benefit from good runtime predictions as this is a good way to improve the user experience. We all know the infamous progress bar metaphor that quickly reaches 80% but then takes ages to get to 100%. As a human being I expect progress to be more... linear ;-)


So this seems like a perfect starting point for exploring Machine Learning. I got my data prepared and ran through all the above steps. I was dealing with numerical and categorical columns in my datasource, but boolean and text columns are also available. Text is good for unstructured data such as natural language, but I did not get into that yet. Everything so far was quite easy and went well.

Now I needed to incorporate the results into the software, which is written in ABAP. Hmmm, no SDK for ABAP. Figured! But I still wanted to enable all my colleagues to take advantage of this new buzzword technology and play around with it, so I decided on a quick implementation using the proxy pattern.


So I created an ABAP based API that calls a PHP based REST service via HTTP, which then utilizes the AWS SDK for PHP to talk to the AWS Machine Learning endpoint I previously created.
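The inside of the /SNP/AWS00_CL_ML class is not shown in this article, but conceptually the proxy call boils down to a plain HTTP round trip from ABAP to the PHP service. The following is only a rough sketch under assumptions: the URL, the JSON handling and the form name are purely illustrative and not the actual implementation.

FORM call_prediction_proxy USING iv_model TYPE string
                                 iv_json_record TYPE string
                        CHANGING rv_result TYPE string.
*"--- DATA DEFINITION -------------------------------------------------
  DATA: lr_client TYPE REF TO if_http_client.
  DATA: lv_url TYPE string.

*"--- PROCESSING LOGIC ------------------------------------------------
  "illustrative URL of the PHP proxy that wraps the AWS SDK
  CONCATENATE 'https://example.com/aws-ml-proxy/predict?model=' iv_model INTO lv_url.

  cl_http_client=>create_by_url( EXPORTING url = lv_url IMPORTING client = lr_client ).

  "POST the record to be predicted as a JSON body
  lr_client->request->set_method( 'POST' ).
  lr_client->request->set_content_type( 'application/json' ).
  lr_client->request->set_cdata( iv_json_record ).

  lr_client->send( ).
  lr_client->receive( ).

  "the PHP side answers with the prediction result
  rv_result = lr_client->response->get_cdata( ).
  lr_client->close( ).

ENDFORM.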

For the ABAP part I wanted things to be both as easy and as generic as possible, so the API should work with any ML model and any record structure. The way ABAP application developers interact with this API looks like this:


REPORT  /snp/aws01_ml_predict_scan_rt.

PARAMETERS: p_comp TYPE string LOWER CASE OBLIGATORY DEFAULT 'SAP ECC 6.0'.
PARAMETERS: p_rel TYPE string LOWER CASE OBLIGATORY DEFAULT '731'.
PARAMETERS: p_os TYPE string LOWER CASE OBLIGATORY DEFAULT 'HP-UX'.
PARAMETERS: p_db TYPE string LOWER CASE OBLIGATORY DEFAULT 'ORACLE 12'.
PARAMETERS: p_db_gb TYPE i OBLIGATORY DEFAULT '5000'. "5 TB System
PARAMETERS: p_uc TYPE c AS CHECKBOX DEFAULT 'X'. "Is this a unicode system?
PARAMETERS: p_ind TYPE string LOWER CASE OBLIGATORY DEFAULT 'Retail'. "Industry
PARAMETERS: p_svers TYPE string LOWER CASE OBLIGATORY DEFAULT '16.01'. "Scan Version

START-OF-SELECTION.
  PERFORM main.

FORM main.
*"--- DATA DEFINITION -------------------------------------------------
  "Definition of the record, based on which a runtime predition is to be made
  TYPES: BEGIN OF l_str_system,
          comp_version TYPE string,
          release TYPE string,
          os TYPE string,
          db TYPE string,
          db_used TYPE string,
          is_unicode TYPE c,
          company_industry1 TYPE string,
          scan_version TYPE string,
         END OF l_str_system.

  "AWS Machine Learning API Class
  DATA: lr_ml TYPE REF TO /snp/aws00_cl_ml.
  DATA: ls_system TYPE l_str_system.
  DATA: lv_runtime_in_mins TYPE i.
  DATA: lv_msg TYPE string.
  DATA: lr_ex TYPE REF TO cx_root.

*"--- PROCESSING LOGIC ------------------------------------------------
  TRY.
      CREATE OBJECT lr_ml.

      "set parameters
      ls_system-comp_version = p_comp.
      ls_system-release = p_rel.
      ls_system-os = p_os.
      ls_system-db = p_db.
      ls_system-db_used = p_db_gb.
      ls_system-is_unicode = p_uc.
      ls_system-company_industry1 = p_ind.
      ls_system-scan_version = p_svers.

      "execute prediction
      lr_ml->predict(
        EXPORTING
          iv_model   = 'ml-BtUpHOFhbQd' "model name previously trained in AWS
          is_record  = ls_system
        IMPORTING
          ev_result  = lv_runtime_in_mins
      ).

      "output results
      lv_msg = /snp/cn00_cl_string_utils=>text( iv_text = 'Estimated runtime of &1 minutes' iv_1 = lv_runtime_in_mins ).
      MESSAGE lv_msg TYPE 'S'.

    CATCH cx_root INTO lr_ex.

      "output errors
      lv_msg = lr_ex->get_text( ).
      PERFORM display_lines USING lv_msg.

  ENDTRY.

ENDFORM.

FORM display_lines USING iv_multiline_test.
*"--- DATA DEFINITION -------------------------------------------------
  DATA: lt_lines TYPE stringtab.
  DATA: lv_line TYPE string.

*"--- PROCESSING LOGIC ------------------------------------------------
  "split into multiple lines...
  SPLIT iv_multiline_test AT cl_abap_char_utilities=>newline INTO TABLE lt_lines.
  LOOP AT lt_lines INTO lv_line.
    WRITE: / lv_line. "...and output each line individually
  ENDLOOP.

ENDFORM.

Now on the PHP side I simply used the AWS SDK for PHP. Setting it up is as easy as extracting a ZIP file, requiring the autoloader and using the API. I wrote a little wrapper class that I can easily expose as a REST service (not shown here).

<?php

class SnpAwsMachineLearningApi {

   /**
   * Create an AWS ML Client Object
   */
   private function getClient($key,$secret) {
      return new Aws\MachineLearning\MachineLearningClient([
         'version' => 'latest',
         'region'  => 'us-east-1',
         'credentials' => [
            'key'    => $key,
            'secret' => $secret
         ],
      ]);
   }

   /**
   * Determine the URL of the Model Endpoint automatically
   */
   private function getEndpointUrl($model,$key,$secret) {

      //fetch metadata of the model
      $modelData = $this->getClient($key,$secret)->getMLModel([
         'MLModelId'=>$model,
         'Verbose'=>false
      ]);

      //check if model exists
      if(empty($modelData)) {
         throw new Exception("model ".$model." does not exist");
      }

      //getting the endpoint info
      $endpoint = $modelData['EndpointInfo'];

      //check if endpoint was created
      if(empty($endpoint)) {
         throw new Exception("no endpoint exists");
      }

      //check if endpoint is ready
      if($endpoint['EndpointStatus'] != 'READY') {
         throw new Exception("endpoint is not ready");
      }

      //return the endpoint url
      return $endpoint['EndpointUrl'];
   }

   /**
   * Execute a prediction
   */
   public function predict($model,$record,$key,$secret) {
      return $this->getClient($key,$secret)->predict(array(

          //provide the model name
         'MLModelId'       => $model,

         //make sure it's an associative array that is passed as the record
         'Record'          => json_decode(json_encode($record),true),

         //determine the URL of the endpoint automatically, assuming there is
         //only and exactly one
         'PredictEndpoint' => $this->getEndpointUrl($model,$key,$secret)
      ));
   }

}

And that is basically it. Of course, in the future it would be great to get rid of the PHP part and have a purely ABAP based SDK implementation, but again, this was supposed to be a quick and easy solution.

Currently it enables ABAP developers to execute predictions on the AWS Machine Learning platform with any trained model, without having to leave their terrain.

In the future this could be extended to providing or updating datasources from ABAP internal tables, creating and training models on the fly and, of course, abstracting things far enough that other Machine Learning providers can be plugged in. So why not explore the native SAP HANA capabilities next...

Wednesday, November 09, 2016

Google Slides API may power my future Slide Applications

So now Google is publishing its Slides API for programmatic use. This opens up a whole new world of slide generation.



This is especially interesting because I have previously been building my own REST services for a slide service that we use for our own SaaS products at SNP Schneider-Neureither & Partner AG - such as the SNP System Scan. The results may look like this and are fully generated using a home-grown REST service API.




As far as the REST API is concerned, it takes a slide definition in JSON format that looks something like this:


{
   "title":"Fun with auto generated Slide Show",
   "author":"Dominik Wittenbeck",
   "subtitle":"",
   "header":"SNP Slideshow",
   "footer":"
",
   "slides":[
      {
         "id":"005056BF5BE41EE6A9D91E8EC1102DD3",
         "title":"First Slide with some HTML",
         "topline":"SNP Slides Showcase",
         "html":[
            "..."
         ]
      },
      {
         "id":"005056BF5BE41EE6A9D91E8EC1104DD3",
         "title":"Second Slide with child slides",
         "topline":"SNP Slides Showcase",
         "items":[
            {
               "id":"005056BF5BE41EE6A9D91E8EC1106DD3",
               "title":"Second Slide with child slides",
               "topline":"SNP Slides Showcase",
               "html":[
                  "..."
               ]
            },
            {
               "id":"005056BF5BE41EE6A9D91E8EC1108DD3",
               "title":"Child Slide 2",
               "topline":"SNP Slides Showcase",
               "html":[
                  "..."
               ]
            }
         ]
      },
      {
         "id":"005056BF5BE41EE6A9D91E8EC110ADD3",
         "title":"Third Slide with a Chart",
         "topline":"SNP Slides Showcase",
         "layout":"vertical",
         "html":[
            "...",
            "..."
         ]
      }
   ]
}


Besides the REST API I have built additional higher-level APIs that can be used, for example, directly in ABAP:

REPORT zdwi_generate_slides_test.
*"--- DATA DEFINITION -------------------------------------------------
TYPE-POOLS abap.

*"--- PROCESSING LOGIC ------------------------------------------------
START-OF-SELECTION.
  PERFORM main.

FORM main.
*"--- DATA DEFINITION -------------------------------------------------
  DATA: lr_deck TYPE REF TO /snp/cn02_cl_slidedeck.
  DATA: lr_slide TYPE REF TO /snp/cn02_cl_slide.
  DATA: lr_sub_slide TYPE REF TO /snp/cn02_cl_slide.
  DATA: lr_chart TYPE REF TO /snp/cn02_cl_slide_chart.
  DATA: lv_id TYPE string.
  DATA: lt_t000 TYPE TABLE OF t000.
  DATA: lv_layout TYPE string.
  DATA: lv_html TYPE string.
  DATA: lv_url TYPE string.

*"--- PROCESSING LOGIC ------------------------------------------------
  lv_id = /snp/cn00_cl_string_utils=>uuid( ).

  "Generate the slidedeck
  lr_deck = /snp/cn02_cl_slidedeck=>create(
    iv_id     = lv_id
    iv_title  = 'Fun with auto generated Slide Show'
    iv_author = 'Dominik Wittenbeck'
    iv_header = 'SNP Slideshow'
    iv_footer = '<br/>'
  ).

  "--- Add first slide with some HTML Content
  lr_slide = /snp/cn02_cl_slide=>create(
    iv_title   = 'First Slide with some HTML'
    iv_topline = 'SNP Slides Showcase'
  ).

  CONCATENATE
    '<h1>' 'Headline' '</h1>'
    '<p>'
      'Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam'
      'nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam'
      'erat, sed diam voluptua. At vero eos et accusam et justo duo dolores'
      'et ea rebum. Stet clita kasd gubergren, no sea takimata sanctus est'
      'Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet, consetetur'
      'sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore'
      'et dolore magna aliquyam erat, sed diam voluptua. At vero eos'
      'et accusam et justo duo dolores et ea rebum. Stet clita kasd'
      'gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet.'
    '</p>'
  INTO lv_html SEPARATED BY space.
  lr_slide->add_html( lv_html ).
  lr_deck->add_slide( lr_slide ).


  "--- Create a second Slide with child slides
  lr_slide = /snp/cn02_cl_slide=>create(
    iv_title   = 'Second Slide with child slides'
    iv_topline = 'SNP Slides Showcase'
  ).

  "...with one child slide...
  lr_sub_slide = /snp/cn02_cl_slide=>create(
    iv_title   = 'Second Slide with child slides'
    iv_topline = 'SNP Slides Showcase'
  ).

  CONCATENATE
    '<p>'
      'Check out the arrows on the lower right, this slide has another child slide'
    '</p>'
  INTO lv_html.

  lr_sub_slide->add_html( lv_html ).
  lr_slide->add_slide( lr_sub_slide ).

  "...and a second child slide...
  lr_sub_slide = /snp/cn02_cl_slide=>create(
    iv_title   = 'Child Slide 2'
    iv_topline = 'SNP Slides Showcase'
  ).

  lr_sub_slide->add_html( 'Content of child slide 2' ).
  lr_slide->add_slide( lr_sub_slide ).

  "...oh, and don't forget to add the main slide to the deck ;-)
  lr_deck->add_slide( lr_slide ).


  "--- On the 3rd Slide let's incorporate some data
  "Let's just fetch basic information about all clients...
  SELECT * FROM t000 INTO TABLE lt_t000.

  "also split that slide into several parts using a layout
  lr_slide = /snp/cn02_cl_slide=>create(
    iv_title   = 'Third Slide with a Chart'
    iv_topline = 'SNP Slides Showcase'
    iv_layout  = 'vertical'
  ).

  "...and put that data in a bar chart in the
  " first part of the layout (=left side)
  lr_chart = /snp/cn02_cl_slide_chart=>create_bar( ).
  lr_chart->set_data(
    it_data      = lt_t000
    iv_x_columns = 'ORT01' "Show number of clients per location
  ).
  lr_slide->add_chart( lr_chart ).

  "...and put some descriptive text to the second part of
  " the layout (=right side)
  CONCATENATE
    '<p>'
      'This is some descriptive text for the chart'
    '</p>'
    '<ul>'
      '<li>' 'and while' '</li>'
      '<li>' 'we''re at it, let''s' '</li>'
      '<li>' 'have a few bullet points' '</li>'
    '</ul>'
  INTO lv_html SEPARATED BY space.

  lr_slide->add_html( lv_html ).


  "...oh, and don't forget to add the main slide to the
  " deck... again  ;-)
  lr_deck->add_slide( lr_slide ).

  "Publish the slide deck via the REST Service and report
  " back the URL that would show it in a browser
  lv_url = lr_deck->get_url( ).
  WRITE: / lv_url.

ENDFORM.

So with the newly published Google Slides API maybe I could take this one step further....

Friday, October 21, 2016

MobX tutorials - MobX + React is AWESOME

Not that I have tried it yet, but MobX looks like a straightforward alternative, especially for developers who have been working with classical model classes for a long time. It just seems so familiar.



Thursday, August 10, 2006

Republished CSTL

I have just republished the article and samples about CSTL (Client Side Tag Libs), which brings TagLibs to JavaScript in an extensible, object-oriented model. The original articles are a good 1 1/2 years old, but since everyone seems to talk about AJAX these days, I just want to show that I had my 50 cents to contribute to that subject a long time ago. Besides, the links had not been working for quite a while.

full article download files

Sunday, January 30, 2005

Featured Tutorial Series - Client Side Tag Libs

Client Side Tag Libraries (CSTL) are a JavaScript based infrastructure that enables web developers and designers to employ custom tags in an (X)HTML page. Custom tags are mainly used to provide easy access to sophisticated UI components. Underneath, CSTLs are based on JavaScript classes that provide object-oriented features like inheritance, polymorphism etc., and on top of that they are easily distributable and redistributable, even across domains.

A TagLib-enhanced (X)HTML document needs to load the so-called TagLibProcessor as an external JavaScript file. The processor then looks for TagLibs registered in the <head> section of the document. After loading the TagLibs, the DOM is traversed to find and process all custom tags, which should be XML namespaced.

A custom tag brings with it a convenient way to create UI components in a JavaScript class, either through direct (and eased) DOM modification or through even easier (X)HTML generation. You can also bind custom tag attributes to JavaScript variables, objects and functions, which are resolved into their real values behind the scenes on demand. In addition, every tag can be refreshed independently, either on a timed basis or in reaction to events, always reflecting the current variable state. This is the basis for great flicker-free user experiences.

<html>
  <head>
     <script type="text/javascript" src="cstl.js"></script>
     <cstl:taglib classPath="com.inspirationlabs.taglib.std.StdTagLib" ns="std"/>
     <script type="text/javascript">
        function date() {
           return new Date();
        }
     </script>
  </head>
  <body>
     <std:print var="${date}" refreshRate="1000">
        N/A
     </std:print>
  </body>
</html>

full article download files

Saturday, November 27, 2004

X-Desktop

Just to have another system from the open-source community opposing the closed-source solution just presented, I have to mention X-Desktop. X-Desktop aims to bring a skinnable windowing metaphor to the web, and they do that pretty well. The project might not have evolved much since it was originally released (about 2 years ago). It consists of a few JavaScript files which provide an API to open, close and arrange skinned inline windows on any website.



You will probably find some bugs that are worth fixing, but basically it's very stable software all in all. I started using it when it first came out and it was still published under a non-commercial license. Those guys are not too big on licensing, and I guess it's still not perfectly clear how you can utilize the API in commercial projects.

Bindows.net

Bindows.net is a client-side rich client application framework that uses pretty much cross-browser JavaScript to do really impressive things. It aims to enable developers to code OS-style applications using just XML and deliver them via the browser.



You can use XML to lay out your application and JavaScript to implement dynamic behaviour for components in their own component model. Components can be embedded pretty easily into existing web sites. It's a truly exciting technology, and since cross-browser support is available for Internet Explorer and Mozilla, I guess the reach of products built with Bindows is pretty great, too.



Although Macromedia Flash can probably deliver better user experiences, Bindows is probably better suited to providing OS-style applications through the web browser.



I have not played around with Bindows too much, but the examples I see in the AppLauncher are very promising. On the downside, this is not an open-source technology, as you need to acquire licenses if you want to use it in production in a commercial project.



I originally came to notice Bindows through the creators' original DHTML website, WebFX, which provides a lot of cross-browser (and also browser-specific) widget components written in XHTML and JavaScript. Nice things such as menus, sortable tables, tabbed browsing etc. are available there for free. Not quite as integrated as Bindows, but this stuff got me hooked back when I used to be a web design script kiddie ;-)