Hand Carry Data Collecting Through Questionnaire and Quiz Alike Using Mini-computer Raspberry Pi (30/5/2020)
Abstract
Conventionally, data collection through surveys or quizzes is done by distributing paper questionnaires or by asking people directly. With the arrival of the Internet, these methods have moved online. For example, in a university with highly developed information and communication technology (ICT), authorized personnel send emails asking students to complete an online questionnaire hosted on a website. In most developing countries, however, such as those in South East Asia, people are already familiar with computing devices such as gadgets, laptops, and netbooks, but they do not have a reliable Internet connection. This work therefore proposes a method that takes advantage of this situation to make the survey process more convenient for both surveyors and participants. Since most people own a gadget, the method provides a portable hotspot device to which they can connect and access a local survey questionnaire website. This is possible thanks to credit-card-sized computers such as the Raspberry Pi. Like any other computer, it can be loaded with an operating system (OS) and installed with a hotspot module and a web server, which is enough to conduct surveys or quizzes over a wireless local area network (WLAN), except that the device is hand carry sized and easier to carry than a laptop. In this work the method is realized and put through a few trials. This research is about mobility on the surveyors' or teachers' side rather than mobile learning on the students' side.

Introduction
There are many forms of data collection. Questionnaires, for example, are used for statistical analysis such as finding students' and teachers' perspectives on e-learning, as in our peers' research Paturusi (2015) and Monmonthe (2016), where they were needed to determine e-readiness in the respective universities studied. In classrooms, quizzes are used more to assess which parts of a subject the students understood clearly and which they did not. Quizzes also have other benefits, such as stimulating the students' learning process, guiding them through the subject, and helping them perform better in exams, as discussed in McDaniel (2012), which experimented with different types of quizzing, such as repeated quizzes with items identical to or only related to the exam, and providing feedback after quizzes. Both questionnaires and quizzes serve the purpose of information gathering. That, however, is not what is discussed here; what is discussed is the method or process of conducting the data collection or survey. Our peers' methods were still quite conventional, distributing paper questionnaires and collecting them back, while others used online methods that rely on computers and an Internet connection, currently one of the easiest ways. However, in most developing countries such as those in South East Asia, the Internet connection is not well established (The World Bank Group, 2016), meaning that online surveys are not the answer, as in Indonesia for example (Kusumo, 2012), which forced our peers to use the conventional method. Most people there are nevertheless familiar with and own computing devices such as gadgets, Android phones, and iPhones (The World Bank Group, 2016), and this research tries to utilize that situation, aiming to be more convenient than the conventional method.
Since computers are utilized, the method also has the main advantage of online surveys, namely automated data collection (Wright, 2005). This topic concerns mobility on the surveyors' or teachers' side rather than the typical mobile learning on the students' side. The proposed method uses a portable server to which the users' computing devices connect in order to perform the survey. The data obtained are stored on that mini server and later extracted by the surveyors with ease; it is also possible to program preprocessing on the mini server, which makes things even easier. This idea can easily be realized since the invention of credit-card-sized computers such as the Raspberry Pi (there are other brands as well, but this one is used here). All that is needed is to prepare the Raspberry Pi by installing an OS, a hotspot module through which users connect over WLAN, and a local website holding the survey itself. After the idea was realized, a small trial was conducted with a few users. More importantly, the advantages of this method are shown and discussed, as are its limitations in terms of resource consumption.

Related Work
Other research has addressed a similar situation, where people own their own computing devices but the infrastructure in their respective places is insufficient to connect to the Internet. Most of this research points to making things portable as the answer. Here are some related works:
Materials and Methods
Device
The device used is a hand carry minicomputer which functions as a portable server. Table 1 lists the modules needed to execute the method described in the next section, and Table 2 gives the specification of the minicomputer. Nowadays the price of a Raspberry Pi ranges from $30 to $50. If not already owned, the items needed to configure the Raspberry Pi can be purchased: a high definition multimedia interface (HDMI) compatible display starting from $20, a keyboard from $5, a mouse from as little as $1, and a power bank from $10.
Method
This work is designed to give convenience and mobility to surveyors and teachers alike in carrying out their task, which for now is limited to collecting responses from others, for example conducting quizzes to assess students' knowledge or surveying crowds to learn their perspective. With limited Internet connection the modern online survey is unusable, but with widespread ownership of computing devices an easier way than the conventional paper-based questionnaire becomes available. That method is the use of a hand carry computer which functions as a portable server to gather data input from other users' or participants' computing devices, which connect to it and function as clients, as illustrated in Figure 1. When conducting surveys it is no longer necessary to hand over paper questionnaires; the surveyor only asks people to connect to the device and answer the questions from their gadgets. It can be applied by surveyors gathering data in institutions, teachers giving quizzes to their students, surveyors going from home to home, or even by individuals in public crowds, whether for commercial or personal use. Unlike the paper-based approach, processing can be performed on the device, which eliminates the need to manually input and process the survey data afterwards and means results can be obtained instantly and cumulatively. As described in the previous subsection, the hand carry device used is a Raspberry Pi. Raspbian OS, a Linux-based OS, is flashed onto this computer. Required modules can be downloaded and installed from the Internet, to which the Raspberry Pi can connect through its wired or wireless interface. The first modules needed are the means to connect users to the Raspberry Pi through a wireless connection based on IEEE 802.11: Hostapd to run the wireless interface as a hotspot and Udhcpd to give IP addresses to connecting clients. The second modules needed are the means to host the questionnaires or quizzes, which in this work are web based: Apache2 as the web server to serve the electronic questionnaire and MySQL as the database server to store the data input by clients. The CMS Limesurvey is used to manage the local questionnaires; a sample screenshot is shown in Figure 2. The third set of modules is not essential but eases the connection process for the clients: the DNS server Dnsmasq to resolve every domain name to the local survey website and Iptables to redirect traffic if the server is connected to the Internet. Together they act as a landing page that automatically directs clients to the questionnaire when they open their browsers; without them, clients have to be told the address beforehand and find the location manually. With all of this done the Raspberry Pi functions as a hand carry server.
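As an illustration of the hotspot part of this setup, the sketch below writes minimal Hostapd, Udhcpd, and Dnsmasq configurations of the kind the method relies on. It is only a sketch: the SSID "SurveyPi", the 10.0.0.x address range, and the file names are assumptions rather than values from the paper, and on a real Raspbian install the configuration files would be edited in place and the services enabled separately. The web side (Apache2, MySQL, and Limesurvey) is installed in the usual way and is not shown here.

# sketch_hotspot_config.py -- illustration only, values are assumptions
# Writes minimal hostapd and udhcpd configurations similar to the ones needed
# to turn the Raspberry Pi's wireless interface into an open survey hotspot.

HOSTAPD_CONF = """\
interface=wlan0
driver=nl80211
ssid=SurveyPi
hw_mode=g
channel=6
"""

UDHCPD_CONF = """\
start 10.0.0.20
end 10.0.0.254
interface wlan0
opt subnet 255.255.255.0
opt router 10.0.0.1
opt dns 10.0.0.1
"""

# Dnsmasq line that resolves every domain to the Pi, so any page a client
# opens lands on the local survey site (the "landing page" behaviour).
DNSMASQ_CONF = "address=/#/10.0.0.1\n"

def write(path, content):
    with open(path, "w") as f:
        f.write(content)

if __name__ == "__main__":
    write("hostapd.conf.example", HOSTAPD_CONF)
    write("udhcpd.conf.example", UDHCPD_CONF)
    write("dnsmasq.conf.example", DNSMASQ_CONF)
    print("wrote example configs; review before copying the relevant parts into /etc")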
Simulation
Small simulations, or trials, were carried out in which one surveyor surveyed 30 people simultaneously. The surveyor was one of our lab members, Elphas Lisalitsa; fortunately he had never heard of the Raspberry Pi when we approached him, so his feedback on this method could be more objective. It was arranged that the surveyor knew how to carry out this method, including using the Limesurvey CMS. Before the trial the surveyor was trained in the method, which fortunately took a single session lasting only a few minutes. This keeps the comparison fair, since the surveyor is already computer literate, skilled in creating questionnaires with document editors such as Microsoft Word and LibreOffice Writer and printing them; as he is versed in those tools, he should also be versed in our method. Someone who does not know how to use LibreOffice Writer would take a long time to make a paper questionnaire, and the same holds for someone who does not know this method. The first experiment was the conventional one using paper: writing a 29-item questionnaire, printing it, handing the copies to the participants, collecting them back, and finally entering the answers into the database. The second experiment used our method: writing the 29-item web-based questionnaire, starting the device, and asking the participants to connect and answer the questions. Due to current limitations a field survey could not be conducted; instead a simulation was run with 29 virtual users and 1 real user attempting the survey on the Raspberry Pi. The same holds for the paper-based experiment, where distributing and collecting the papers was only simulated and a single participant answered the questions. In the end the surveyor was asked to compare the convenience of both methods. The questionnaire items were based on a MOOC readiness survey conducted in high schools and a national university in Mongolia, containing 18 five-point Likert scale questions, 5 yes or no questions, 4 multiple choice questions, and 2 fill-in questions, in total 633 words and 3628 characters. That survey was led by our peer Otgontsetseg Sukhbaatar. For further simulation, stress testing was conducted to see whether the device could handle up to one hundred users. As stated before, the authors did not have a real testing ground, so a simulation was carried out using Funkload, a web stress testing application (Delbosc, 2017), from another, more powerful computer to simulate up to a hundred virtual users accessing and completing the survey at the same time. The application recorded the activities in the browser, from accessing the survey and answering the questions to viewing the current results, and later replayed them in benchmarking mode with more virtual users. CPU and memory usage and power delivery were also measured, but most importantly the response time.

Result
Data Collection Process
Figure 3 shows the time consumption of both methods, with little difference in the preliminary and main data collection processes. In the preliminary data collection process, the conventional method starts by opening LibreOffice Writer and writing the 29 questions, which took 33 minutes. Next the 3-page questionnaire is printed for 30 people using an OKI C332 fast printer, which took as little as a second per page; everything took roughly 1 minute and 30 seconds, assuming automatic stapling. Older printers may take much longer. Also, the more paper, the heavier the load, while the Raspberry Pi weighs only 45 g. The time to create the questionnaire on the Raspberry Pi depends on the application used, in this case Limesurvey. The time consumption can be divided into two parts, typing the questions and delays from the web system, with detailed data shown in Figure 4.
Using the developer tools available in all browsers, the questionnaire creation process can be monitored in detail. In summary, delays from the web, such as loading and scripting, took 1 minute and 28 seconds, while typing the questions itself took 34 minutes and 27 seconds. For the paper-based method the issue is the need to produce hard copies, which adds printing time, while for this method the time depends on the hardware and software capabilities of the server and of the client if working remotely. Greater capabilities lessen web delays such as page loading, and conversely more lag occurs with lower capabilities. For the data collection process, the concern with the paper-based method is the manual labor of distributing the questionnaires and collecting them back, while for the Raspberry Pi based method the concern is its computing capability, since performance degrades as the number of users grows (more details are discussed in the next subsection); the capability of the client's device also has an influence. For the paper-based method, distributing the questionnaires took 1 minute 15 seconds and collecting them back took 1 minute 10 seconds. For this method the time to connect was 1 minute and 2 seconds and the web delay was 11 seconds, tested for one user with 29 virtual users logged in (this result relates closely to Figure 6). As for answering the questions, there was little difference: the paper-based method took 2 minutes and 54 seconds while this method took 2 minutes and 59 seconds. Finally, the post data collection process is where the advantage of this work's method shows. The conventional method requires an extra step of entering the data into the database. Figure 3 assumes the fastest semi-automatic way, using machines: a scanner to scan the answer sheets and optical character recognition (OCR) to read the answers and put them into the database automatically, as in English tests or national examinations. This took 7 minutes and 30 seconds for 90 pages of responses (3 pages multiplied by 30 people), with our Epson ES-H300 scanner handling 5 seconds per page. However, most surveyors do not have this technology and type the answers in manually one by one, which takes far longer; usually two people are assigned the exact same task so that their entries can be cross-checked against each other to mitigate human error. Note that this does not yet include generating graphs for analysis. Even so, the hand carry server method surpasses those methods (whether manual or machine assisted with a scanner), since it can store responses and generate analysis with graphs the instant the participants answer the questionnaire. This makes clickers possible, like the polls on television shows. The page in Figure 5 showing the statistics has to be refreshed manually to show the latest results, but this depends on the services provided by the survey system; a bit of asynchronous JavaScript and XML (AJAX), or JavaScript Object Notation (JSON) based updates, can make it more real time, with the page updating automatically. In short, this process can be a heavy burden on the surveyor with the paper-based method, whereas with this method there is no need to go through it at all, saving considerable labor and time. In the end, the total time consumption in Figure 3 is shorter for this method because it does not need a post data collection process.
Device's Performance Measurement
As stated in the previous section, the authors are currently unable to conduct larger field testing, so a simulation was run instead using Funkload to simulate up to a hundred virtual users taking the survey. According to Nah (2007) a tolerable waiting time for information retrieval is approximately 2 seconds, and according to Baily (2001) around 5 seconds is still acceptable and 10 seconds is the maximum. For this work a 10-second response time was taken as the limit. Figure 6 shows the response time when 1 up to 100 virtual users attempted the survey. This can be regarded as the worst case scenario, since the users access the survey at the exact same time, meaning all questions multiplied by up to 100 are loaded and all answers multiplied by up to 100 are submitted at once. It is called the worst case because simultaneous loading and submission almost never happens; in a real scenario the timing is random and the load is always lighter. The data obtained were somewhat unexpected, showing that 100 virtual users simultaneously loading and submitting 30 questions (one extra fake question was added to round the number) was too much to handle. Therefore additional experiments with fewer questionnaire items, 5, 10, and 20, were added. For the real case of 30 items, if a guaranteed response time below 10 seconds is sought then 10 simultaneous users is the maximum; if an average of 10 seconds is still acceptable then it can handle up to 30 users (which matches Figure 4 quite well). If longer waits are tolerable it can take up to 85 users before failure occurs; the service broke after 90 virtual users, requiring a restart of the web and database servers. Fewer questionnaire items allow faster response times: for 20, 10, and 5 items the 10-second maximum occurred at 15, 25, and 30 virtual users respectively, while the 10-second average occurred at 45, 70, and 100 virtual users. Why does the number of items relate to response time? Because the user has to load the items in the web browser when attempting the survey; the user requests and the web server transmits, and the more items, the more transmission takes place. After the attempt the user also has to send the responses, and the more items, the more responses must be sent. Again, Figure 6 shows the worst case, where all users request all items and return all responses at the same time, which almost never occurs; more user capacity may actually be available, but treating the measured data as the limit gives a reliable, guaranteed judgement. To measure CPU and memory usage, an application called Vmstat was run every second, printing the current CPU and memory usage; usage was calculated as the difference between the free CPU and memory and the totals available. Figure 7 shows that during survey creation the CPU usage was below 40% and memory usage was below 500 MB; lower resource use is expected since only one user is creating the survey. During survey attempts, however, CPU usage was mostly above 80% and memory usage mostly above 600 MB, because 30 users were attempting a 30-item questionnaire at the same time. The explanation is much the same as for response time: more computing resources are needed to allow more simultaneous attempts and more questionnaire items.
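The stress test itself was run with Funkload and is not reproduced here. Purely as an illustration of the worst-case access pattern described above, where every virtual user loads the questionnaire at the same moment, the sketch below fires a number of concurrent requests at a survey URL and reports the response times; the URL and the user count are placeholders, not the actual test setup.

# load_sketch.py -- illustrative concurrent-access sketch, not Funkload
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

SURVEY_URL = "http://10.0.0.1/survey"   # placeholder address of the local survey page
USERS = 30                              # number of simultaneous virtual users

def one_attempt(user_id):
    """Load the survey page once and return the elapsed time in seconds."""
    start = time.time()
    with urlopen(SURVEY_URL, timeout=60) as resp:
        resp.read()                     # download the whole questionnaire page
    return user_id, time.time() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=USERS) as pool:
        results = list(pool.map(one_attempt, range(USERS)))
    times = [t for _, t in results]
    print("max response time: %.2f s, average: %.2f s"
          % (max(times), sum(times) / len(times)))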
The energy consumption was measured from how much the power bank discharged. The power bank has a rated capacity of 20000 milliampere hours (mAh). After going through the whole process in Figure 3, the percentage shown on the power bank's monitor dropped from 100% to 97%, meaning only 3% was used; the calculation in Equation 1 gives 0.03 x 20 Ah = 0.6 Ah over 39 minutes, or about 0.92 Ah per hour, which matches the experiment reported in "Raspberry Pi FAQs" (2016) quite well. The voltage was 5 volts (V), so in one hour the device draws 0.92 Ah multiplied by 5 V, about 4.6 watt hours (Wh). In the end power delivery is not a big deal.

Conclusion and Future Work
This work shows that the hand carry server method is more convenient than the paper-based method. Comparing the time consumption of the two methods, this work's method is faster since less manual labor is required. The advantage is most visible in the post data collection process, where responses are inserted into the database automatically and the statistics are displayed instantly in real time. Although it provides great convenience, there are limitations due to the resources available on the hand carry server. With 5, 10, 20, and 30 questions in the survey, the response time can be guaranteed not to exceed 10 seconds as long as the number of users does not exceed 35, 25, 15, and 10 respectively. If going beyond that is still tolerable, the simulation showed that an average response time of 10 seconds occurred at 100, 70, 45, and 30 virtual users for 5, 10, 20, and 30 items respectively. The same holds for CPU and memory usage, which were mostly consumed when the number of users rose above 30, each loading a 30-item questionnaire. For a class with an average number of people, the device can handle it. This work only introduces the idea with a simple application, and much is yet to be implemented; there is room for improvement in its data structure and performance. Other issues yet to be discussed are privacy and reliability, for example susceptibility to data loss and failure. Synchronization may also be discussed, from the hand carry device to a main server and between hand carry devices if more than one is used for a survey, i.e. how to combine the data together. In the future we will also try other, more popular hand carry devices such as mobile phones, investigate whether they can function as a portable server like the one in this work, and compare them with this work.

Acknowledgment
Part of this work was supported by JSPS KAKENHI Grant-in-Aid for Scientific Research 25280124 and 15H02795. The authors would like to thank Elphas Lisalitsa for his willingness to act as the surveyor in the trial, in which he was burdened with two kinds of surveys, the paper-based method and the hand carry based method, from creating the questions and collecting the data to entering the data. The authors would also like to thank Otgontsetseg Sukhbaatar for providing her questionnaire items and sharing her experience with paper-based surveys in some high schools in Mongolia.
Rdiff and Rsync Implementation on Moodle's Backup and Restore Feature for Course Synchronization over the Network
Abstract
E-learning has been widely implemented in education systems. Most higher institutions have applied Learning Management Systems (LMSs) to manage their online courses, with Moodle one of the most favored LMSs. On the other hand, creating a well designed and well written course remains a problem for teachers, which is why the community encourages them to share their courses for others to reuse. The authors or teachers then continuously revise their courses, forcing subscribers to re-download the whole course each time, which soon leads to exhaustive network usage. To cope with this issue, a synchronization model for the course's backup file is proposed, retrieving only the differential updates. This paper proposes synchronization on top of the existing backup and restore feature: file synchronization is performed between course backup files based on the rsync algorithm. The experiment was conducted on a virtual machine, a local network, and a public network. The result showed lower network traffic compared with the conventional sharing method, just like our previous synchronization method. Unlike the previous one, however, this method has two additional advantages: flexibility in controlling the synchronized content and compatibility with all versions of Moodle.

Introduction
It is very common today to deliver education using electronic devices, referred to as e-learning. Advanced application systems that manage e-learning, known as LMSs, are widely used in higher education. Modular Object-Oriented Dynamic Learning Environment (Moodle) is one of the most popular and preferred LMSs for delivering courses online. Many higher institutions in one of the authors' countries of origin have implemented Moodle as their LMS [1], and that work also discussed the problems faced by the country's students. The authors of [2] investigated the readiness of e-learning implementation at Sam Ratulangi University, and the implementation of mobile learning on a GPRS network was assessed in [3]. With so much research on e-learning under way, more universities are likely to implement e-learning soon. No doubt the students are fortunate to be given more flexibility: with just a computing device and an Internet connection they can attempt these online courses without being limited by place and time. It is also very flexible on the teacher's side, as they can prepare their courses beforehand and give feedback to students at their leisure. However, designing and writing good content may not be easy; it takes experience and time to make a well designed and well written course, and some special content may only be correctly written by professors. For this reason Moodle encourages course sharing, as stated in [4], and there are many other sites that provide course backups deployable on Moodle. As time passed another problem was encountered: constant revision inevitably occurs when perfecting a course. In addition, with today's multimedia technologies course creators may, for example, add videos to their courses, which makes very large course backups common in terms of file size. The problem becomes more serious as the survey in [1] of 10 different universities in Indonesia shows the Internet connection to be one of the major obstacles faced when implementing e-learning. To overcome the constant revision of course contents and the Internet connection problem, the work in [5] proposed course content synchronization.
With this method there is no need to re-download the whole course whenever it is revised, only to retrieve the revised part. The application was created for Moodle version 1.9, so another one had to be developed for later versions of Moodle, as in the subsequent work [6]. Those previous methods convert the course's database and directories into blocks and calculate the difference remotely between the outdated and the latest course. In other words, the previous application also handled the export and import of courses, which leads to the issue that a new application must be created every time Moodle's structure changes. Moodle already has a course backup and restore feature, so it is better to let Moodle handle that part and focus only on the synchronization. This leads to an application compatible with all versions of Moodle; the existing feature also provides more flexibility over which contents are synchronized. Accordingly, this paper proposes file synchronization between course backup archives based on the rsync algorithm, which can calculate the difference between files remotely. Figure 1 shows the general framework of the proposed method, where only a reference of the outdated backup archive needs to be sent and used to create a patch. Thus the objective of this research is to develop a course synchronization application that is compatible with all versions of Moodle.

Related Work
Course Sharing
The introduction of the term massive open online course (MOOC) was the starting point from which many online courses became open via the web and allowed unlimited participants. In Moodle's case it was the Teaching with Moodle MOOC [4] run by Moodle HQ. Thousands of educators from around the globe have taken this MOOC and been introduced to Moodle both as users and as course creators; it still runs periodically today. The participants are encouraged to share their courses on [7]. On that website visitors may try online courses or download them in .mbz format, the output of Moodle's course backup and restore feature, and that is not the only website offering online course sharing.

Course Synchronization
When the authors of [5] wanted to implement a distributed LMS for higher institutions in Indonesia, using their proposed method to distribute courses was not entirely possible due to the band-limited network connection, i.e. the low capacity of the Internet connection. When dealing with an education curriculum, developing online courses takes continuous and countless revisions, which forces redistribution of the courses and heavily burdens the network capacity. The general framework of the previous synchronization method, on both the master and the slave LMS side, consists of the Moodle tables and synchronization tables, a conversion of the Moodle tables into blocks containing sets of ID, hash, and version information. It is between these two synchronization tables that the synchronization occurs. First, version matching takes place. If the slave side is outdated, block matching takes place. If new information exists on the master LMS, it is added to the slave LMS and the instruction is marked as "append". If information on the slave LMS does not exist on the master LMS, it is deleted and the instruction is marked as "delete". Finally, if information exists on both sides but maps differently, the instruction is marked as "update". Overall the synchronization has three main steps.
Other than the database, this applies to the course's directory as well. With that algorithm a standalone application was written in PHP, compatible with Moodle version 1.9. The experiment was conducted between Institut Teknologi Sepuluh Nopember (ITS) Surabaya, Indonesia, and Kumamoto University, Kyushu, Japan, and showed low network traffic usage.

File Synchronization
The courses are shared as backup archives in .mbz format, and our method applies remote file synchronization to the transmission process by utilizing the rsync algorithm. A common file patching system needs both files, i.e. the unrevised file and the revised file, on the same system in order to create a patch for the previous version; uniquely, rsync can perform this remotely. Suppose there are two LMSs, one on the master side and the other on the slave side. The master side has the latest backup file α while the slave side has the outdated backup file β. Based on [8] it is possible to update β to the latest revision α with the following steps: (1) the slave side splits β into a series of non-overlapping fixed-size blocks, where the last block may be the same size or smaller; (2) two checksums, a weak "rolling" 32-bit checksum and a strong 128-bit MD4 checksum, are calculated for every block of β; (3) the checksums are sent to the master side; (4) the master side searches α to find all blocks, at any offset, that have the same weak and strong checksums as one of the blocks of β; and (5) the master side sends a sequence of instructions to the slave side to construct a copy of α, which either refer to blocks of β or carry data retrieved from α that does not match any block of β. The name rsync itself belongs to an application already installed in most Linux distributions; its manual page [9] describes it as a fast, extraordinarily versatile file copying tool that can replace conventional copying because it sends not the whole file but only the differences from an existing file. This paper, though, uses rdiff, an application that generates the difference between two binary files based on the rsync algorithm. Basically it is an rsync implementation that gives more control than the existing rsync application; rdiff is part of the librsync package [10]. Another application used is rdiffdir, since the course backup file is an archive; rdiffdir is the directory synchronization version of rdiff and is included in the duplicity package [11].

Experiment
Backup and Restore Feature
Moodle has a course backup and restore feature that can back up a course into .mbz format. Users with privileges are given almost full control over what to back up from the course, from whether to include users, anonymized users, or no users at all, to backing up the full content or only certain parts of it. This is shown in the menu screenshot in Figure 2, and in Figure 6, which is also our course design, showing the capability of choosing certain sections to back up. The restore feature provides the same menu. From Moodle's documentation [12] it is also possible to alter the backup file for advanced use.

Synchronization Method
As stated in the previous section, the experiments use rdiff rather than rsync directly because sharing a course backup over an rsync daemon or SSH is still uncommon, whereas sharing over hypertext transfer protocol (HTTP) is very common. The slave side generates a signature file of its course backup archive and sends it to the master.
The master side uses the received signature file and its own course backup archive to compute the delta file, which can be regarded as a patch for the slave side's course backup archive. The master side returns the delta file to the slave side, and the slave side generates the latest version of the course backup archive. The overall flow is illustrated in Figure 3. Two kinds of synchronization are demonstrated: one directly synchronizes the backup archive using rdiff, and the other synchronizes each file inside the backup archive recursively using rdiffdir. Unlike the first, which is purely binary file synchronization between the master's and slave's course backup archives, the second is closer to course synchronization. The inside of the course backup archive can be seen in Figure 4. The "activity" folder contains forums, lessons, quizzes, and the like; the "course" folder contains mostly the course's settings; the "files" folder contains materials uploaded to the course; and the "section" folder defines each section of the course. Rdiffdir recursively performs rdiff on those files. The result of rdiffdir is shown in Figure 5, where the difference of each file resides in the "diffs" folder, files newly added on the master side in the "snapshots" folder, and instructions to delete files removed on the master side in the "deleted" folder.

Scenarios
The experiment uses the main author's own course, developed in Moodle version 3.0, as the material; it has three large sections (topics), as seen in Figure 6. We also made the course available at [13], with login username "teacher" and password "teacher". The experiment has seven scenarios, where scenario 1 is without synchronization and the others are with synchronization: (1) retrieving the whole course backup file (conventional sharing); (2) large content addition on the master side (the slave side has only 1 section); (3) medium content addition on the master side (the slave side has 2 sections); (4) small content addition on the master side (adding a URL module); (5) a small change on the master side (modifying a text in one of the course outline modules); (6) a section order change on the master side (section 2 shifts to section 1, section 3 shifts to section 2, and section 1 shifts to section 3); and (7) no change on the master side. Moreover, the scenarios are conducted in three situations: (a) local machine and virtual machine, (b) local area network (LAN), and (c) public network at [14]. The local machine acts as the slave side and the other as the master side. Very simple PHP scripts were written to perform the synchronization illustrated in Figure 3, and the total sent and received traffic is measured using the packet capture tool Wireshark, as discussed in the next section.

Result
The first subsection, Demonstration, shows that the developed application utilizes the output of Moodle's course backup and restore feature. Unlike the previous applications in [5] and [6], it is not responsible for exporting and importing courses but relies on Moodle's internal feature, which makes this paper's synchronization application compatible with existing and upcoming versions of Moodle. The second subsection, Measurement Results, shows that the application functions as a synchronizer like the previous applications in [5] and [6] by showing the network efficiency during transmission.

Demonstration
We made the PHP scripts available at [15].
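Those scripts essentially wrap three rdiff invocations plus two file transfers. A minimal sketch of the same signature/delta/patch sequence is given below, written in Python purely for illustration; the file names match the ones used in the demonstration, while the transfer of the signature and delta files between slave and master (done with curl in the actual PHP scripts) is left out.

# rdiff_sketch.py -- illustrative outline of the synchronization steps (transfers omitted)
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Slave side: generate a signature of the outdated backup archive.
run(["rdiff", "signature", "backup.mbz", "backup.mbz.sig"])

# (backup.mbz.sig is then uploaded to the master LMS)

# 2. Master side: compute a delta from the slave's signature and the master's latest archive.
run(["rdiff", "delta", "backup.mbz.sig", "backup.mbz", "backup.mbz.delta"])

# (backup.mbz.delta is then downloaded by the slave LMS)

# 3. Slave side: apply the delta to the old archive to reconstruct the latest one.
run(["rdiff", "patch", "backup.mbz", "backup.mbz.delta", "backup.new.mbz"])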
The first draft developed gives users on both master and slave a feature to dump their own course backup archive in .mbz format; what information the backup archive contains depends on the options used in Moodle's backup and restore feature. We utilize a common PHP file upload script of the kind found in many tutorials on the web, except that for this experiment the file is automatically renamed to "backup.mbz". The demonstration shown in this section is for scenario 2. Figure 7 is the console for both master and slave LMSs to initially dump their course backups. As seen on the slave side, the outdated "backup.mbz" file has a size of around 16 MB, since it contains only the first section of the course in Figure 6 (a). The next step is clicking the update button. The update button runs an instruction to generate a "backup.mbz.sig" signature file from the "backup.mbz" archive using the rdiff command, then sends "backup.mbz.sig" to the master LMS URL stated in the script, written with curl in PHP. The script that accepts the file on the master LMS (the same common PHP upload script) runs an extra instruction to generate a delta (patch) file, with "backup.mbz.sig" and the master side's "backup.mbz" as inputs. The next step is to send the generated patch file "backup.mbz.delta" to the slave LMS; for that we invoke a script on the slave LMS, written with curl in PHP, to download "backup.mbz.delta". That script also contains instructions to back up the previous "backup.mbz" as "backup.mbz.backup" and to apply the patch using the rdiff command, updating "backup.mbz" with "backup.mbz.delta" as input. Finally, Figure 8 shows the updated "backup.mbz", which has a new file size of 30 MB and includes all the content seen in Figure 6. It also shows that "backup.mbz.sig" has a size of around 16 kB and "backup.mbz.delta" of around 23 MB. The overall process is then repeated for each scenario. The second draft is similar to the first except that it implements rdiffdir; it shows a signature file of around 1.5 MB and a delta file of around 16 MB for scenario one. During the synchronization process the "backup.mbz" archives on both master and slave side are extracted into a folder named "backup". Starting on the slave side, rdiffdir recursively generates signatures for each file in "backup" and stores them in an archive "backup.sig". The "backup.sig" is then sent to the master side and used as a reference to recursively produce deltas for each file in the master's "backup" folder, storing the deltas in an archive "backup.delta". Next, "backup.delta" is sent to the slave side to patch the "backup" folder, which is finally recompressed into the archive "backup.mbz".

Measurement Results
The experiment was conducted by sending the signature file, which contributes to the outgoing network traffic, and retrieving the delta file, which contributes to the incoming network traffic. The first experiment synchronizes the course backup archive directly with rdiff (Figure 9) and the second synchronizes each file contained within the course backup archive with rdiffdir (Figure 10). The signature file produced was roughly 200 kB and the delta file around 20 MB. The first scenario (without synchronization) downloaded the whole course backup file, which had a size of around 30 MB, while the other scenarios (with synchronization) downloaded only the difference generated by rdiff.
The overall result shows that the proposed method is more efficient than the conventional way (scenario 1). In this case the slave side consumes around 30 MB of total traffic without synchronization and around 20 MB with synchronization, so the proposed method yields an efficiency of about 10 MB of network capacity in terms of bandwidth. For scenarios 2 and 3 the outdated courses differ considerably from the latest course, and the results prove the method very beneficial in this case. For scenarios 4, 5, and 6 the outdated courses differ very little from the latest course, yet the result still shows around 20 MB of network consumption, which is very high for this case; this is due to synchronizing while both archives are still compressed. The second experiment, on the other hand, decompresses both archives and synchronizes each file within, which is more accurate for course synchronization. Scenarios 4, 5, and 6 make only small changes to the course contents, so the incoming network consumption is also small, around 1.5 MB, a very large efficiency compared with the first synchronization experiment, although the outgoing traffic increases due to the larger number of signature files. Either way, both experiment results are better than without synchronization. The last scenario shows very low traffic because the course backup file on the slave side is already up to date with the master side, so no update is required. Since the measurement is based on outgoing and incoming traffic, it is logical that the public network shows slightly higher traffic than between virtual machines or on the local area network.

Conclusion and Future Work
Like the previous course synchronization method, the proposed use of rdiff and rsync on the backup archives of both master and slave sides saves network consumption when sharing courses with Moodle, with two additional merits over the previous method. The first is the flexibility to configure which course contents are synchronized, and the second is time efficiency, since no adaptation of the proposed application is needed when the Moodle version changes; however, neither was fully demonstrated in this paper. Therefore, in the future we will further develop its compatibility and demonstrate it on all versions of Moodle and other LMSs. The method also opens the possibility of developing partial course synchronization.

Acknowledgment
Part of this work was supported by JSPS KAKENHI Grant-in-Aid for Scientific Research 25280124 and 15H02795.
Incremental Synchronization Implementation on Survey Using Hand Carry Server Raspberry Pi
Default Copyright
A copyright is the right to copy an intellectual property. By default, the copyright belongs to the creator, with the requirement that the creator's name is labelled on the intellectual property. Anyone else who wants to use or copy the work must have permission from the copyright holder. The copyright holder can also open the work by changing the right to a Creative Commons license, or give up the right entirely by labelling the work as public domain.

Copyright Transfer
A copyright transfer is the transfer of copyright holding to another party. The original author loses that authority, so why would anyone want to do this? Generally, for marketing: the author may not have the capability to sell their work, so they rely on publishers, and depending on the contract the author and publisher split the profit. On the academic side, authors need reputation, so they try to have their work published in top journals, proceedings, or reports. Why not do it themselves? It is a big extra effort to build a work's reputation, and researchers generally want to focus on creating and writing without being burdened with anything else. Top journals and proceedings provide peer review that controls the quality and polishes the work. They have great marketing, a large audience, great quality, reputation and trust, a wide network, many professionals, and so on. If you decide to publish yourself, you need to build everything from scratch.

After Copyright Transfer
After a copyright transfer you lose the rights to the work. The copyright now rests with the other party and they decide the permissions regarding your work. They can give you full permission, but usually they give only partial permission; you are still the author of the work even though you no longer hold the copyright. Can you share your work? It depends on the party you gave the copyright to. If you do not know, you had better ask them. If they publicly state that they do not allow sharing, you need to ask them for permission and negotiate.

Need Advice
I will share my works on personal websites and blogs since they allow me to, but their definition is vague. If it is a server that I have physical access to, then it is strictly clear. However, for a server where the author is allowed to upload and delete files without the consent of others (e.g., a blog, the server of a university department, or a preprint server), such as Blogger, GitHub, and Publish0x, I can post and delete as I want, but I do not own the platform and they may revoke my rights, for example by banning me; or maybe my understanding is wrong and whatever I post and delete is actually based on those platforms' consent. Please leave a comment if you understand. In my opinion the message is: if the copyright holder requests that I delete my post, I can delete it immediately, and that is what matters. So, what happens if I am banned on those platforms, will my posts be deleted or will they remain? If they will be deleted, then I am confident in posting. Please leave a comment if you know the answer.
Is Zero Electricity Cost Cryptocurrency Mining Possible? Solar Power Bank on Single Board Computers (26/5/2020)
Authors: Fajar Purnama, Irwansyah, Muhammad Bagus Andra, and Tsuyoshi Usagawa
Abstract
Bitcoin has reached $10000 per coin again, and the values of other cryptocurrency coins have also increased drastically, but that does not mean mining has become profitable at the personal level. The cost of electricity and Internet remains a liability in households, but what if there were a method to reduce that electricity running cost to zero? The authors came up with the idea of using solar panels to generate the electricity and, more than that, a practical method that average people can easily follow. That method is the combination of a solar panel, a USB power bank, and USB powered computing devices, usually smartphones and single board computers. The solar panel converts sunlight into electricity and the power bank serves as the battery to store it; today's available power banks are able to power USB powered computing devices. This article contains a short mixed discussion of economics, the environment, and innovative technology.

Introduction
It has been 11 years since Satoshi Nakamoto published the bitcoin whitepaper [1]. Bitcoin came into the spotlight at the end of 2017, when the price peaked at up to $20000 per coin. The bubble then burst and the price dropped to $3000. At the time of writing, the price has soared once again to $10000. The rising price attracts many investors and the volatility attracts many traders; in other words, many people seek to own bitcoin and other cryptocurrency coins for profit. Originally these coins were not meant as an investment instrument but as a novel method for electronic transactions. While a common electronic transaction needs a third party, such as a bank or another financial institution, to verify the transaction, cryptocurrency coins do not. However, that is a discussion for another time due to the limited space of this article. Straight to the point, this article discusses methods to make mining profitable. The technical details are too much to discuss here, but financially, mining is the process of obtaining cryptocurrency coins by donating computational power to the network. Electricity cost is the biggest problem, so the majority of miners seek a renewable source of energy such as hydro, solar, or wind [3]. This article uses solar energy for electricity generation but, unlike others, this work is scaled to household size and primarily targets the general public. Since the targets are households, the objective of this work is to assemble a solar powered mining machine where the materials are easy to get and the method is easy to follow. This article's innovation is a solar power bank powering USB computing devices; due to the limited space of this article, only a single board computer, the Asus Tinker Board (ATB), is demonstrated. The remaining discussion is about how profitable this innovation is.

Materials and Method
Table 1. Materials necessary to execute the concept of this work.
The first step is to build the device. The necessary materials are listed in Table 1 and can be bought at an electronics shop or online. Once the materials are available they should be assembled as shown in Figure 1. The solar panel is used to charge the power bank and should be exposed to sunlight. The power bank is used to power the USB computing devices and, if necessary, the device that provides the Internet connection. The second step is to set up the software. Although other computers and accessories are not necessary during mining, they are necessary while setting up the software. Generally there are five steps in setting up the software: 1) installing the operating system, 2) installing the miner and its dependencies, 3) choosing a coin to mine, 4) joining a pool or setting up solo mining, and 5) creating a cryptocurrency wallet. The third and last step is mining itself.

Discussion
Table 2. Asus Tinker Board average resource consumption.
This discussion covers the limits of the solar panel, the overall resource usage of mining, and the financial report. From the power consumption in Table 2, the power bank in Table 1 can last from 12 to 33 hours. The solar panel on average takes 30 hours to fully charge the power bank. During mining, the power usage in Table 2 is larger than the power generated by the solar panel in Table 3, which makes charging on the fly less advisable.

Table 3. Solar power generated daily.
The financial report is the main interest for the public, where the main question is how profitable this method is, as described in Table 4. The only asset is the computing device itself, which generates income, while the others are liabilities, i.e. the running costs, the most common being electricity and Internet cost. The variables that determine the mining income are described in Table 5; all of them depend on the coin, in this case Litecoin.

Table 4. Profitability table.
Table 5. Variables that affect mining income.
The hash rate depends on the hardware and software; a higher hash rate means higher income. The block difficulty depends on the total number of miners, or more accurately the total hash rate on the network. From a financial point of view, the block difficulty represents the competition: the higher the block difficulty, the lower the income. The block reward is the reward for solving blocks; a higher reward means higher income. The coin value, or coin price, is a highly debated topic to this day, and discussing the correct value of coins is too much for this article; for now, coin prices are referenced in United States dollars (USD). The formula to calculate the amount of bitcoin obtained from mining is given in Formula 1; for other coins the formula can be slightly different but follows a similar concept.

Expected payout in BTC = (H x t x B) / (2^32 x D)   [3]   (1)

where H is the hash rate, t the mining time, B the block reward, and D the block difficulty.

The main discussion of this article concerns Table 6, which shows how much money can be earned using this article's method; the Internet cost is omitted to limit complication, because in reality the Internet is used not only for mining but also for all other activities. Additionally, the profit of regular mining, paying for electricity, is compared with this article's method of generating one's own electricity with the solar panel and power bank. Regular mining yields not a profit but a loss, while mining with this article's method is profitable but limited by the daily mining time, because the power generated in Table 3 is not enough to run the mining for the whole day. From the data in Table 2 and Table 3 it is possible to calculate the daily mining time in Table 6. The daily income is thus the mining income of Table 4, converted to USD, multiplied by the daily mining time in Table 6. The overall financial result is that this article's method is able to reap a profit where mining is usually not profitable.

Table 6. Income rate of mining when paying for electricity versus getting electricity from the solar panel.
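As a worked illustration of Formula 1, the small function below computes the expected payout; the numbers in the example call are made-up placeholders, not the measured values from Table 5.

# payout_sketch.py -- worked example of Formula 1 (placeholder numbers, not Table 5 data)
def expected_payout(hashrate_hs, seconds, block_reward, difficulty):
    """Expected coins mined = H * t * B / (2**32 * D)."""
    return hashrate_hs * seconds * block_reward / (2**32 * difficulty)

if __name__ == "__main__":
    # Hypothetical example: 10 kH/s for one day, block reward 12.5, difficulty 15,000,000.
    coins = expected_payout(10_000, 24 * 3600, 12.5, 15_000_000)
    print("expected coins per day: %.12f" % coins)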
Conclusion
This article successfully implemented mining on a single board computer without paying for electricity, by harvesting solar energy. The method is well suited to households because the materials are affordable and easy to obtain, and the assembly process is not complicated. Although the financial report shows a profit, the profit is extremely small; it would take a year to earn a dollar. The problem is Litecoin. In this work Litecoin was chosen because it is mineable on CPU, GPU, and ASIC alike. In reality, different hardware has different coins that are profitable to mine; for example, mining Magicoin on a CPU can profit $0.0026 a day, which is 349 times more profitable than Litecoin. Another factor is the speculative price of the coins; for example, nobody predicted that the price of bitcoin would rise from $1 to $10000 in ten years. This work is only an introduction, and many possibilities are not yet explored. Other than constantly searching for and switching to the right coin to mine, expanding the number of devices may increase income, and there are other types of renewable energy still unutilized. Aside from finance, this innovation is good for education, for contributing to cryptocurrencies, and as a hobby.
Demonstration on Extending the Pageview Feature to Page Section Based Presentation
Authors: Fajar Purnama, Alvin Fungai, Tsuyoshi Usagawa
Abstract
The Internet has made it easier to access information almost anywhere at any time. With all sorts of analytics, such as the pageview, behaviorists can see how users browse the web. However, there is a limit to how much data the pageview analytic can provide. Pageviews can answer what, when, and where a webpage is viewed, but not how a page is viewed; simply put, they cannot show the reading pattern of a user on a page. This work therefore proposes to extend the tracking capability of the pageview feature so that monitoring reaches down to the sections of a page. The demonstrated web application consists of JavaScript, which provides the main feature of reading pattern tracking; Java, which stores the data in a database for later analytics; and a simple HTML page on which these were tested. The web application can show the date a particular section was accessed and the time the user spent on that section. It can also provide data that show a reader's reading pattern, which in the future can be used for analysis by other researchers.

Introduction
The invention of the Internet greatly changed how people access, share, and store information. Before, people had to take a long walk to the library to read books and attend seminars to learn from others. Today many people share information on the Internet, which is accessible practically anywhere at any time; all a person needs is a computing device connected to the Internet. Not long afterward, online analytics, also called web analytics, was introduced, letting us see what visitors do on our webpages. One such analytic is the pageview, which can show the number, duration, and variety of people reading a webpage. Conventionally, the popularity of a book is judged by surveying how many copies were sold and how many people have read them; online, there is no longer a restriction of place and time, and with the help of computers it is possible to obtain data that can determine readers' behavior. The field of education was also influenced by these technologies, sparking popular terms such as e-learning, online courses, and learning analytics. Our colleagues implemented e-learning at their respective universities, and their research [1] [2] was able to determine whether the lecturers and students were ready to use it; their data can suggest the next course of action for the university to fully utilize e-learning. Another study [3] on online discussion forums states that the design of the online course determines how active the students will be; their data show that assignments and quizzes decrease idleness and lurking and increase the number of active students. The research in [4] distinguished the learning patterns of those who fail and those who pass an online course, for example what materials students read, what exercises they attempted, and how often they joined online discussion before attempting an online exam. Their research contributes to the quality of education, but there is still a limit to how far their data can see, since they consist only of contents viewed, discussions posted, assignments submitted, quizzes attempted, and scores, which cannot determine detailed reading patterns. When reading, a person can read everything in detail, skim through the page, or semi-skim, reading only the headlines and then reading in detail if something looks interesting.
A common example is when researchers browse published manuscripts: they first read only the title, then the abstract if the title is eye catching, next skim through the manuscript by reading the introduction, figures, and conclusion if the abstract is interesting, and finally read in detail if they find it important to them. In short, the answer to what, when, and where the contents are viewed is in hand, but how the contents are viewed still remains a question. To answer this question it is necessary to extend the pageview feature by pushing its monitoring down to each section of the page or contents. This is one of many related learning analytics and online analytics works, and it introduces a web application that can track the reading pattern of a user. It consists of Javascript and Java tested on a Hyper Text Markup Language (HTML) page. However, the objective of this work is only to demonstrate the simple Javascript and web API created, since it has not been tested on popular websites today. Though it is only tested on a simple HTML page, it can demonstrate recording of the date accessed and the time spent by a user on a particular section, an idea that is still quite new today. Related WorkOne of the most basic tools that all users on the Internet know is the browser history, which holds records of the date and time of the websites they visited. Software developers have built more advanced tools. A Chrome Browser plugin called Timestats [5] shows statistical and timeline analyses of the websites users visited, revealing browsing behaviors such as how long they have been browsing for the day and which websites were frequently visited, presented in graphs and pie charts. Relic Browser [6] is able to record user clicks and scrolls and, if desired, keyboard typing, all of which can show how users responded to a webpage. Another application, CA App Real Browser Monitoring [7], can play back browsing sessions based on the events recorded, which is very close to what this work wants to achieve. On the other side there are applications developed by researchers in the learning analytics field. One of our colleagues [8] [9] developed an open textbook analytic tool that can record students' actions, for example page flipping, bookmarks, links clicked, notes created, and time spent on chapters and pages; [10] is also quite related but targets e-magazines, stating that it is able to show reading habits. From our perspective they desired to achieve section based monitoring similar to this work, but they used pages, as in an electronic textbook, to divide contents into sections, whereas this work intends to monitor deeper than that. The closest work, and possibly a better one, is the Finger Trail Learning System (FTLS) [11], which records the mouse cursor's trails. However, it forces the users to trail over each character of a text in order for it to appear and be highlighted, while this work does not force the users to do so; they just browse normally. The authors made an introductory work in [12] that only highlights the main idea; in this work a detailed discussion is made. Web ApplicationArchitecture OverviewThe web application architecture can be seen in Fig. 1 and consists of a representation interface, a web application programming interface (API), and a database. The representation interface tracks the reading pattern of a user, in this case the date and duration on a section, which is the state of the art of this work.
The web API is quite common: it retrieves the data recorded by the representation interface, stores it in the database, and may process the data to show statistics. Together they form a complete web application whose structure is quite standard and simple but may require additional features depending on the implementation. The next subsections explain in detail the method of the representation interface and the web API, but the full source code can only be viewed at the link [13]. Representation InterfaceListing I Concept of Tracker Code
1: <form id="sect" action="thewebAPI"
2:   onmouseleave="submitFunction(sect)">
3: <div id="sect" onmouseenter="startCount(sect)"
4:   onmouseleave="stopCount(sect)">
5:
6: <input type="text" id="date<sect>" name="date">
7: <input type="text" id="duration<sect>" name="duration">
8:
9: Section Contents
10:
11: </div> </form>
The main idea lies mostly in the representation interface, where the user's reading pattern is captured. This can be achieved by embedding a client side programming language, in this case Javascript, in the content. Listing I shows the concept of applying tracker code to a section's content of a webpage. Line 9 represents certain content of a webpage, and the tracker code is clad around it to capture the date and duration when a user is reading the content. In this work the mouse cursor is used to indicate which section the user is currently reading. On line 3 the "onmouseenter" attribute of the "div" tag indicates to record the date and start the timer through the customized function "startCount(sect)" when the mouse cursor enters the section and its contents. On the other hand, the "onmouseleave" attribute indicates to stop the recording when the mouse cursor leaves the section through the customized function "stopCount(sect)". The time function is based on [14] with modifications: resetting the timer when counting stops, getting the system's date when counting, and turning the variables into arrays, where the functions use the parameter "sect" that takes a different value for each section in order to separate the data recorded per section. The "onmouseleave" on line 2 inside the "form" tag indicates to submit the date and the time spent on that section to the web API when the mouse cursor leaves the section. To submit them, lines 6 and 7 are necessary, following the standard writing of a "form" in HTML.
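To make the concept easier to follow, below is a minimal Javascript sketch of what the customized functions named in Listing I might look like. The authors' actual timer is a modification of [14] and is only published in the source code [13], so the function bodies here are illustrative assumptions rather than the authors' implementation; it is also assumed that each section has a unique identifier string such as "sect1", that the date and duration inputs carry the ids "date" and "duration" followed by that identifier, and that the surrounding form carries the id "form" followed by that identifier.
// Illustrative sketch only: one running timer and one accumulated counter per section.
var timers = {};   // interval handle per section identifier
var seconds = {};  // accumulated viewing time in seconds per section identifier

function startCount(sect) {
  // record the access date and start counting when the cursor enters the section
  document.getElementById("date" + sect).value = new Date().toLocaleString();
  if (!(sect in seconds)) seconds[sect] = 0;
  clearInterval(timers[sect]); // avoid starting a second timer if the section is re-entered
  timers[sect] = setInterval(function () {
    seconds[sect] += 1;
    document.getElementById("duration" + sect).value = seconds[sect];
  }, 1000);
}

function stopCount(sect) {
  // stop counting when the cursor leaves the section
  clearInterval(timers[sect]);
}

function submitFunction(sect) {
  // hand the recorded date and duration over to the web API
  document.getElementById("form" + sect).submit();
}
With this wiring, every time the cursor leaves a section, the form of that section posts its recorded date and duration to the web API named in the form's action attribute.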
Listing II Tracker Code Insertion Algorithm
Input: parent_node : a parent node in the HTML body; section_node : a parent node chosen as section identifier; parent_node_name : the name of the parent_node; parent_node_length : total number of parent_node; defined_section_node_name : the name of the section_node; defined_section_node_length : total number of section_node;
i = 0; j = 0;
create tracker_code(i);
insert tracker_code(i) before parent_node(j);
j = j + 1 /* go to the next parent_node because the current parent node is now tracker_code */
while parent_node_length > defined_section_node_length + 1 do
    /* this process repeats until every parent_node is clad with tracker code */
    if parent_node_name(j) == defined_section_node_name do
        i = i + 1 /* move to the next tracker_code */
        create tracker_code(i);
        insert tracker_code(i) before parent_node(j);
        j = j + 1;
    end if
    move parent_node(j) into tracker_code(i) as child
    /* the number of parent_node decreases and "j" now points to the next parent_node */
end while
In the introductory work [12] the tracker code was inserted manually as in Listing I, which will work on any webpage, while this work introduces a method to insert the tracking code dynamically using the document object model (DOM) of HTML with the algorithm shown in Listing II, though it has only been tested on a simple HTML page containing no tags other than "html", "head", "body", "h1", "p", and "br". Most HTML today is more complicated and may need extra methods for it to work. The tracker code is everything in Listing I except line 9, and the goal is to insert it automatically. For a simple HTML page the body usually contains nodes/tags such as "h1" for headings, "p" for paragraphs, "div" for sections, and so on. First, one of them has to be chosen as a section identifier, called a section node. Second, create as many tracker codes as there are section nodes (in the algorithm there is one additional tracker code because the starting node is usually not a section node). Third, put each section node and the nodes following it into their respective tracker code (the first section node and its following nodes until the second section node go into the first tracker code, then the second section node and its following nodes until the third section node go into the second tracker code, and so on); a minimal Javascript sketch of this insertion is given after the Web API description below. Web APIThe web API's job is to retrieve the tracking data (date accessed and time spent) of a section from the representation interface and store it in the database. Since capturing the tracking data uses a client side programming language, the data is still stored on the client side, and therefore the client must hand the data over to the server. The "form" tag in Listing I submits the data to the server, and the server must be prepared with a server side programming language, in this case Java. The first thing the program needs to do is read the parameter values "date" and "duration" from lines 6 and 7 of Listing I and store them in variables. The second thing is to make a connection to a database server, create a database if it does not exist already, create a table, and store the values in the table. MySQL is used as the database server in this work. Afterwards another web API can be created to mine this data and present it like the ones in [5], which is beyond the scope of this work.
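As referenced above, the following is a minimal Javascript sketch of the dynamic insertion described by Listing II, assuming "h1" is chosen as the section node on a simple HTML page. The element ids, the form method, and the handler names follow the assumptions of the previous sketch; the authors' actual insertion code is only available in [13].
// Illustrative sketch only: wrap every "h1" section of a simple page in tracker code.
function insertTrackerCode() {
  var body = document.body;
  var nodes = Array.prototype.slice.call(body.childNodes); // snapshot of the parent nodes
  var i = 0;
  var currentDiv = null;

  nodes.forEach(function (node) {
    // every time a section node ("h1") is met, or for the very first node, open a new tracker code
    if (node.nodeName === "H1" || currentDiv === null) {
      i += 1;
      var sect = "sect" + i;

      var form = document.createElement("form");
      form.id = "form" + sect;
      form.action = "thewebAPI";
      form.method = "post";
      form.setAttribute("onmouseleave", "submitFunction('" + sect + "')");

      currentDiv = document.createElement("div");
      currentDiv.id = sect;
      currentDiv.setAttribute("onmouseenter", "startCount('" + sect + "')");
      currentDiv.setAttribute("onmouseleave", "stopCount('" + sect + "')");

      // hidden fields that carry the recorded date and duration to the web API
      var date = document.createElement("input");
      date.type = "hidden"; date.name = "date"; date.id = "date" + sect;
      var duration = document.createElement("input");
      duration.type = "hidden"; duration.name = "duration"; duration.id = "duration" + sect;
      currentDiv.appendChild(date);
      currentDiv.appendChild(duration);

      form.appendChild(currentDiv);
      body.insertBefore(form, node); // insert the tracker code before the section node
    }
    currentDiv.appendChild(node); // move the node into the current tracker code as a child
  });
}
window.onload = insertTrackerCode; // run once the simple HTML page has loaded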
DemonstrationThe demonstration is shown in Fig. 2 and Fig. 3. The HTML page normally consists only of headers and paragraphs. Once an external Javascript source is applied, the tracker code is inserted, and for this demonstration a timer count is shown on the page. The first and second boxes contain the first name and last name of the user, followed by the third box containing the section's name extracted from the header's text. Once the mouse cursor enters the first section, as in Fig. 2, the date appears in the fourth box and the timer in the fifth and sixth boxes (the fifth box was designed for accumulated timing and the sixth box for discrete timing, which is not demonstrated here). Next, when the mouse cursor moves to the next section, the timer of the previous section stops and the timer of this section starts, as in Fig. 3. The page submits the six data items from the six boxes to the web API every time the mouse cursor leaves a section. The web API stores them in a database, and another simple web API can be created to show the data. An example output can be seen in Fig. 4, where a raw output from the database's table is shown. From the data it can be said that the user starts reading the first section of the content for 6 seconds, then moves to the next section and reads there for 10 seconds, then moves on again. Normally when the duration is short the user is skimming through the page, but when the duration is long the user is focused on that section. Compared to using the pageview, this data can show more information about the user's reading patterns. Conclusion and Future WorkThe web application was able to demonstrate recording of the date accessed and time spent by a user on a section of an HTML page, which can be used as an extension to the pageview feature. The data obtained from the web application was able to show more reading pattern information, and researchers in related fields may use this in their analyses and may find new answers. Though the concepts were shown in this work, they have not yet been implemented on a real situation or webpage, which is left for future work. Further evaluation issues such as appearance, compatibility, and resource consumption still need to be addressed. AcknowledgmentPart of this work was supported by JSPS KAKENHI Grant-in-Aid for Scientific Research 25280124 and 15H02795. Reference
Mirror
AuthorFajar Purnama, Alvin Fungai, Thinh Minh Do, Al Hafiz Akbar Maulana Siagan, Anwar Annas, Harry Susanto, Hendarmawan, Tsuyoshi Usagawa, Hiroshi Nakano. Note
AbstractMany researchers have used the page view feature to identify the behavior of online users. Evaluators of e-learning, for example, use content views, quiz attempts, and scores to analyze the students' performance. However, these data still lack some details compared to a thorough face to face evaluation. The data can answer what, when, and where, but cannot answer how a page is viewed. To answer that question it is necessary to track as deep as the page section level. This introductory work shows a path to that answer by demonstrating tracking of the date and duration viewed of an HTML page section using Javascript. This extended feature will open up new possibilities for those researching the behaviors of online users. IntroductionWith today's information communication technology (ICT) it is very common for people to publish information online, which can be viewed anywhere at any time unlike conventional hard copies such as books. This made web page analytics possible (i.e. how long and how many times a page is viewed). These features greatly benefit the field of education and gave birth to popular terms like e-learning, online course, and learning analytics. Our colleagues [1] [2] implemented e-learning at their respective universities and some schools, and they were able to monitor their students' performance with ease, which was essential data for their research. The research in [3] focuses on the online discussion forums of students and distinguished between non-active students, lurkers, and active students through the number of posts and post views. Another research in [4] shows the differences in learning patterns between students below and above average. Their research was able to greatly improve the quality of education; however, their data still only consist of the likes of the number of contents viewed, discussions posted, assignments submitted, quizzes attempted, and their scores, which still do not show the detailed behaviors or habits that can be observed during face to face interaction. In simple terms, the data can answer what, when, and where the contents were viewed but cannot answer how the contents were viewed. The authors have the idea to extend the details of the data that can be collected, such as what part of the page the person is currently focusing on, how long the user spent on that part, how many times the user clicked, how often the user scrolled up and down, what the user typed, and other user interactions. Due to the limitation of this paper, only a glimpse of this idea is shown, as stated in the title of this article as introductory. This work will show that client-side programming can be implemented on a web page to measure the time spent by a user on a particular section along with the accessed date. Thus the objective of this work is to demonstrate a web application that can track the date and duration viewed on sections of a page. Related WorkA related work that most people know is the browser history, which contains what sites we have previously visited and when. Projects in the ICT industry have built analytic tools, for example TimeStat [5], a Chrome Browser plugin that can generate statistical graphs representing our browsing behavior, including when and how long we spent on a page. A more advanced tool example is Google Analytics [6], which can capture various user interactions such as mouse clicks, mouse scrolls, keyboard typing, images viewed, videos played, and so on. There are also other works by researchers in the learning analytics field.
One of our colleagues [7], in his open textbook analytic system framework, was able to record students' actions such as movements to the next/previous page, jumps to a chapter, link clicks, bookmarks, and annotations; [8] is also very similar and claims to be able to identify readers' reading habits on e-magazines, but neither has dived as far as this work's proposed state of the art, which is tracking the sections viewed on a page. One of the closest works to this one is [9], in which a finger trail learning system (FTLS) was built where the users must scroll over every letter of the reading context. The letters are highlighted once the pointer touches them. The work is very similar and maybe better, but not the same: they capture users' reading habits by pointer trailing, while this work is about the time spent by users reading particular sections. Application ArchitectureThe application architecture can be seen in Fig. 1. It consists of a representation interface, a web application program interface (API), and a database. The proposed state of the art is on the representation side, where client-side programming is embedded in a web page to record the section page view events of the user. The other parts are the web API and the database, which store, analyze, and present the captured events, and are common knowledge. The web API is written in a server-side programming language, which can be Java, hypertext preprocessor (PHP), or any other language, and functions to put the captured events into the database and to retrieve and present the data. Advanced analysis can be done on this side to represent the data statistically, for example in the form of a line graph. Finally, the database is a place to store the captured events and usually uses a query language such as the structured query language (SQL). The database can be MySQL, MariaDB, or another known database application. The prototype in this introductory work is a hyper text markup language (HTML) page embedded with Javascript as the client-side programming language on the representation side. Due to the limited space of this work, only the part that captures a user's reading session on a particular section is presented, in List. 1. This introductory work relies on the mouse pointer or cursor as the indicator of where the user is currently focusing. Sectioning is done by "div" tags; "onmouseenter" is an attribute that executes the function "startCount" to start the timer when the mouse pointer enters the section, and "onmouseleave" is an attribute that executes the function "stopCount" to stop the timer when the mouse pointer leaves the section. The parameter "sect" inside each function separates the output (duration, etc.) for that particular section only. It is unfortunate that the timer Javascript function cannot be revealed here, but it is a slight modification of [10]. Each section is covered by "form" tags, and as the mouse leaves the section ("onmouseleave") the values of the user identification, the section, the date, and the duration of view are submitted to the web API. The web API stores those values in the database and can also view those stored values. The web API is written in Java Servlet, uses Java Database Connectivity (JDBC), and the database uses MySQL.
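For completeness, below is a minimal sketch of what such a web API endpoint could look like. The actual implementation is a Java Servlet using JDBC and is not reproduced here; purely for illustration the same idea is sketched in Javascript for Node.js using the Express and mysql2 packages, and the endpoint path, table name, and field names are assumptions rather than taken from this work.
// Illustrative sketch only, not the Java Servlet used in this work.
// Requires Node.js with the "express" and "mysql2" packages installed.
const express = require('express');
const mysql = require('mysql2/promise');

const app = express();
app.use(express.urlencoded({ extended: false })); // parse the submitted form fields

const pool = mysql.createPool({
  host: 'localhost', user: 'root', password: '', database: 'tracker'
});

// assumed table layout for the captured events
const createTable = 'CREATE TABLE IF NOT EXISTS section_views (' +
  'id INT AUTO_INCREMENT PRIMARY KEY, ' +
  'section VARCHAR(255), date_accessed VARCHAR(64), duration_seconds INT)';

app.post('/thewebAPI', async function (req, res) {
  // field names are assumptions; the paper submits user identification, section, date, and duration
  const section = req.body.section || 'unknown';
  const date = req.body.date || '';
  const duration = Number(req.body.duration) || 0;
  await pool.query(createTable);
  await pool.query(
    'INSERT INTO section_views (section, date_accessed, duration_seconds) VALUES (?, ?, ?)',
    [section, date, duration]
  );
  res.send('stored');
});

app.listen(8080); // the representation interface would point its form action here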
The full source code is available in [11] on GitHub, where we also provide source code on another branch in which the sectioning "div" and "form" tags are added automatically. Currently this is a separate Javascript code that detects "H1" (header) tags in an HTML file and puts each section's contents as child nodes of the sectioning tags "div" and "form". List. 1. Tracker code example on a section of a text
<form id="sect" action="thewebAPI" onmouseleave="submitFunction(sect)">
<div id="sect" onmouseenter="startCount(sect)" onmouseleave="stopCount(sect)">
<h1> Section 1 </h1>
<input type="hidden" id="value">
<p> This is section 1. </p>
</div>
</form>
DemonstrationFig. 2 is a view of List. 1. The left figure shows that the timer starts when the mouse pointer is in the first section. Afterwards it stops when the pointer leaves section 1, and the timer of section 2 starts when the pointer enters it, as shown in the right figure. For both sections, the date of the last time the section was pointed at is also generated. Therefore Fig. 2 demonstrates the possibility of tracking the time spent by a user on a particular section. Using a web API and a database, the result can be stored and output, revealing when and how long a user spent on certain sections of the page, as in Fig. 3. Conclusion and Future WorkThis introductory work demonstrated an application that could track the date and time spent on certain sections of a web page, which will be an extended feature of the currently existing page view feature. The data that can be obtained from this feature opens new possibilities for those who research the behaviors of online users, and may benefit those in the field of e-learning alike. The next work will implement a module or plugin for content editors in content management systems (CMS) and learning management systems (LMS) to provide a button that adds this tracking feature to their contents. However, this may not be suitable for commercial use, since the coding nature of web pages can be complicated and varied, making this method hard to implement. It is suggested as future work to develop a browser plugin that tracks the window of the browser itself instead of the web pages. Reference
I will write the unique thing about Publish0x first. For me it is a fantastic match because I get to follow my favorite crypto news and get paid. Although this platform was initiated by crypto enthusiasts, everyone else is welcome, and maybe soon the contents will become more general. If you are reading this on Publish0x, I want to tell you that this article is also distributed to other platforms to promote Publish0x; you can see what they are at the end of this article. Token Withdrawal ProofIgor Tomić, the COO at the time, was kind enough to reward me $5 of LRC for joining the Loopring.io experience article contest even though I was late. I requested the LRC token withdrawal on 3 May 2020 and it arrived on 5 May 2020; the payout is every Monday. The transaction ID is 0xed3b0dfaf03545195c563a55fed1b2d9a6e1e91f0a9a56bfc707ad436cbb1795 and my receiving address is 0xcf354a0012160bc5dae441c49f0b2d7e4a4ffc96. At that time, to withdraw: first go to settings and, in the wallet field, input your Ethereum public address (I highly recommend an address whose private key you hold); second, go to the dashboard; third, go to payments, request a withdrawal, and wait. Ambassador ProgramIf you are interested in joining Publish0x, first as a reader and maybe afterwards as a writer, join using this link https://www.publish0x.com/register?a=4oeEw0Yb0B&tid=weebly if you don't have any referrer. Publish0x is one of the few platforms that provides a unique way to refer people. As you can see from the link above, I was given a unique referral identification (ID) of "?a=4oeEw0Yb0B" which I can attach to any Publish0x link to refer someone and receive commissions even if the articles are not mine. For example, if I find interesting articles, I can share them with my friends or post them to social media while attaching my referral ID to the articles' links, and if someone registers using that link, I will receive a commission. The ambassador program is the term Publish0x uses, while most other platforms use the term referral. I do suggest and hope that one day Publish0x can upgrade the referral system so that not only the inviters but the invitees get commissions as well. It doesn't necessarily have to create more expenses; it could simply allow the inviters to share their commissions with the invitees. For now, the ambassador program only benefits the inviters:
TippingsThe Tipping SourcesThe tipping does not come from your pocket; Publish0x sponsors the tipping and you get a share of it. In other words, both you the reader and the writer get paid. Technically, you decide how Publish0x monetizes the readers and the writers. Where does Publish0x get its revenues?
Token VarietyPublish0x is crypto agnostic, meaning they are open to accepting any token. They do not have a token of their own, which means they did not do an initial coin offering (ICO). At the time of this writing, the tokens in their pools are:
The Tipping FunctionTipping LimitationFor WritersEarningsIt is Still FreeStill, be grateful because you get to publish for free. Like any other content platform, publishing requires resources; not only do they provide a platform where you can publish as many articles as you want for free, you also get a share of their earnings. Remember back in the very old days when we had to compete to get our article into the newspaper. RulesAs ReadersAgain It is FreeAgain, be grateful that it is free and that additionally you get pennies for reading. Remember how much you used to pay for books, magazines, and newspapers. Well, most online platforms should remain free. Your VoiceSuggestions for DevelopersI have suggestions for improving the platform, so I humbly suggest:
Mirror
OutlineAboutLoopring is currently an Ethereum token. From https://loopring.org, Loopring allows anyone to build high-throughput, non-custodial, orderbook-based exchanges on Ethereum by leveraging Zero-Knowledge Proofs. Loopring is claimed to be secure, high throughput, and low cost. Loopring Exchange (Loopring.io) is the first decentralized trading platform built on top of Loopring 3.0. Loopring.io DEX Fees
First Time on DEXIf this is your first time on a decentralized exchange (DEX), you should know that the concept of a DEX is that ownership stays with you, which is different from the centralized exchanges you probably know, where whenever you deposit coins you give up control over those coins to the exchange in order to trade and do other financing. The most important factor for centralized exchanges is trust, different from a DEX, which is programmed, if I'm correct, using smart contracts. Since this is an Ethereum based DEX, you need:
Using Loopring.io DEXReferral ProgramNormally this comes after registration, but from my side I would like to share my referral link first. I'm a referral fanatic: not only do I enjoy marketing my referral link, I also enjoy using others' referral links to support the referral system. I was disappointed that I could not find anyone on the web sharing their referral link. If you don't have anyone that refers you, I will be grateful if you use my referral link https://loopring.io/invite/1632. Connect and RegisterDepositBefore depositing, you need to have your coins in your wallet connect application, and especially Ethereum for the transaction fee. At the time of writing this article, the DEX supports: TradeWithdrawOthersStakingAPIIf you are a programmer, you may be interested in looking at the API. For example, my friend is interested in making automatic trading robots based on his own indicator.