With the release of other smart chains, yield farming has become more affordable for average users. Although Binance Smart Chain was not the first to follow Ethereum's path, it was the spark for the new farming craze because of its cheap fees, averaging around 10¢ per transaction. Still, did you know that using Harvest Finance is better than farming directly?
In my first article about Harvest Finance, I wrote about the difference between annual percentage rate (APR) and annual percentage yield (APY). To review it simply: APR is the interest generated annually, while APY includes all the interest generated by reinvesting previously generated interest. With a traditional certificate of deposit, we may choose to claim interest monthly; with APY, we deposit that interest as well to earn even more interest. In that first post I covered the concept of APY but did not yet relate it to the technicalities of decentralized finance (DeFi).
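To make the difference concrete, here is a small sketch (the function name is mine) that computes APY from APR for a given compounding frequency:

```python
def apy(apr, periods_per_year):
    """APY when interest is reinvested `periods_per_year` times a year."""
    return (1 + apr / periods_per_year) ** periods_per_year - 1

# With no reinvestment (one period), APY equals APR.
# 30% APR compounded daily yields roughly 35% APY.
print(round(apy(0.30, 365) * 100, 2))
```

The more frequently the interest is reinvested, the further APY pulls ahead of APR, which is why daily auto-compounding matters.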
Executing a smart contract costs gas fees, and the two fees we definitely cannot avoid are approving the contract and depositing assets into farms. Other gas fees can actually be avoided by automating the process, in other words bundling the steps into a single smart contract instead of separate ones. Let's take a look at three scenarios, assuming we want to reinvest our earned interest every day:
The Legacy Process
Every day costs $0.1 * 2 = $0.2 in gas fees to reinvest (one fee to harvest, one to deposit). In a year, that is $0.2 * 365 = $73 in gas fees.
The Current Process
Today, almost every farming platform has a "compound" button, so we no longer need to spend a fee to harvest before reinvesting. Every day costs a $0.1 gas fee to reinvest. In a year, that is $0.1 * 365 = $36.50 in gas fees, which cuts the cost in half.
The Automated Process
Let us say the smart contract auto-harvests and reinvests our interest daily. Then there is no need to spend gas fees at all, which reduces the cost to zero. This auto-compounding is the basic feature available in Harvest Finance, though I have not yet read exactly when the smart contract harvests and reinvests.
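The three scenarios above boil down to simple arithmetic, sketched here using the article's assumed 10-cent average BSC transaction fee:

```python
FEE_CENTS = 10  # assumed average BSC gas fee: 10 cents per transaction

legacy_usd = FEE_CENTS * 2 * 365 / 100    # harvest + deposit daily -> $73
current_usd = FEE_CENTS * 1 * 365 / 100   # one "compound" tx daily -> $36.50
automated_usd = 0.0                       # the vault compounds for everyone

print(legacy_usd, current_usd, automated_usd)
```

Swapping in a $50 Ethereum fee instead of 10 cents shows why auto-compounding vaults were first built there.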
If I am not mistaken (or assuming this is the case for this article, since it does not change the logic of the explanation), the highest amount of CAKE tokens goes to those who provide liquidity between BNB and CAKE, with an APR of 82% * 40 on Pancake Swap at the time of writing. From here, there are three approaches:
Compound To CAKE Pool
Doing this costs a $0.2 gas fee to reinvest the CAKE into the CAKE pool at 92% APR.
Compound To BNB-CAKE Pool
Doing this costs twice the gas fee ($0.4) compared to simply staking the harvested CAKE into the CAKE pool, but note that this strategy can be more profitable because the BNB-CAKE pool has an APR of 82% * 40 while the CAKE pool only has 92% APR.
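As a rough sketch (the simulation itself is my assumption, not Harvest Finance's method), we can see how daily gas eats into daily compounding, which is exactly the trade-off between the two approaches:

```python
def yearly_value(principal, apr, daily_gas, days=365):
    """Naively compound `apr` daily while paying `daily_gas` each day."""
    value = principal
    for _ in range(days):
        value += value * (apr / days)  # one day's worth of interest, reinvested
        value -= daily_gas             # gas paid to perform the compound
    return value

# CAKE pool at 92% APR: with a small $100 stake, a $0.2 daily gas fee
# noticeably drags down the result versus gas-free auto-compounding.
with_gas = yearly_value(100, 0.92, 0.2)
without_gas = yearly_value(100, 0.92, 0.0)
```

The higher the APR of the pool, the more each skipped gas fee compounds in your favor.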
In my second article about Harvest Finance, I emphasized the value of assets. Generally we want to farm an asset that is stable or, better, goes up in value. What I did not write is that the APR or APY changes dynamically when the asset's value changes, provided the token emission rate stays the same. A token that spikes up in value will also cause a spike in the APR and APY indicators.
Why Harvest Finance is Better
Harvest Finance provides auto-compounding and high-yield strategies. Not only that, we are also rewarded with FARM tokens for using Harvest Finance. Look forward to more, as Harvest Finance is always researching better strategies.
Finally Understanding The Significance Of Ethereum Farms
I never farmed on Ethereum because it is expensive, so I only knew the process theoretically. With many cheaper smart chains today, I finally started farming myself and learned the details I had missed when I was only reading articles. On Binance Smart Chain, Harvest Finance saves us around a hundred dollars a year, assuming daily auto-compounding. Now let us go back to the old days and see why Harvest Finance was so valuable. On Binance Smart Chain, the average gas fee is $0.1, while on Ethereum the average gas fee is $50. What if we change the narrative of this article to Ethereum?
Would you look at that: on Ethereum, Harvest Finance potentially saves tens of thousands of dollars. In other words, the above strategy was never possible manually, so farmers compounded not daily but monthly or even less often, and with Harvest Finance a higher APY became possible because of the fee elimination.
This article was first published on Publish0x and mirrored below:
If you know any more, please leave them in the comment section.
Disclaimer: this is only a list and not financial advice to jump into any of them. Many can be opportunities, but do your own research (DYOR), because the price can dump, there can be a rug pull, or there can be vulnerabilities in their code. Although most of them are audited by Certik, do read the audit results. Leave a comment if you know more or if you have a comment about any of these decentralized exchanges.
What Many People Ignore in Staking and Yield Farming
Back in the old days, whenever someone said HODL, my reply was: why not just stake instead and earn some interest? That is true if the coin we stake goes up in value. However, many people did not think about what happens if the coin's value goes down. For example, you have 100 coins worth $1 each ($100 total) with an interest annual percentage rate (APR) of 30% a year:
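Using that example, a quick sketch of the break-even after one year of staking:

```python
coins, price, apr = 100, 1.00, 0.30

coins_after = coins * (1 + apr)          # 130 coins after one year of 30% APR
breakeven = coins * price / coins_after  # price can fall to ~$0.77 before loss
value_if_halved = coins_after * 0.50     # if the price halves, only $65 left
```

So the 30% interest only protects you down to roughly a 23% price drop; anything worse and staking still loses money in dollar terms.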
Issue With Many New Players
Now, the problem is the new players in crypto. Old timers like us are used to doing our own research (DYOR): check the fundamentals, check the sentiment, and finally check the technical analysis of a coin. If there is potential, then buy the coin. The majority of new players probably do not bother doing their own research, and even if they did, we still wonder if they could handle the volatility of the market. 10% - 20% up and down a day is normal in crypto, but those new players will most likely have a heart attack after seeing their investment (or gamble) down 10% the next day and panic sell. The day after, they cry because the gamble they sold went up 50%. When I became known to be in crypto, most of the people who came to me wanted to give me money to invest in crypto on their behalf.
The Safe Plan I Found
Therefore, for these people, I searched for low-risk investment plans in crypto, and one of the answers I found is yield farming using fiat stablecoins. Yield farming in crypto is providing liquidity and getting rewarded in fees plus some tokens. It is called farming because the coins we plant generate crops. For those of you who still do not understand, just think of it as a certificate of deposit (CD) that generates interest in another currency; for example, we deposit dollars and we get interest in yuan. Like staking, yield farming is not profitable if the interest does not cover the loss of the asset's value (the coin going down in price). However, there are farms now where we do not need to risk our investment in volatile coins but can use fiat stablecoins as the seeds to grow some yield.
Yield farming started during the decentralized finance (DeFi) craze in 2020. For example, we can just supply some dollar stablecoins such as USDC and DAI and earn interest plus farm their COMP token. Why did I not start back then but only now? The fees on Ethereum became crazily expensive for average people. Imagine paying $50 to deposit and then another $50 to withdraw. This is because of Ethereum's scalability issue, with only dozens of transactions per second, since they focus more on decentralization and security. The more users come, the more users wait, and they do not like waiting, so what do they do? Rich people farming hundreds of thousands or millions of dollars are willing to pay thousands of dollars in fees; a thousand dollars is like a snack to them. What about average people like us? We are only willing to pay a few dollars, and how long must we wait until a miner serves us? Maybe forever, because there are always people willing to pay more.
Good news for average people like us: this year there are Ethereum Layer 2s and other alternatives such as Binance Smart Chain, Avalanche, and Wan Chain, with fees ranging from almost zero to at most 20 cents. We can profit from farming with just a hundred dollars. Other than that, what is stablecoin farming useful for?
Before I continue: while the calculator shows no risk because it is USD generating interest and yield, be careful with the platform, because a malicious platform can steal our money. For now, top well-known platforms such as Venus Protocol, Pancake Swap, and Ape Swap are reliable, and maybe some other audited platforms as well, such as Conveyor Belt. Other than that, be careful with other platforms; for example, Turtle DEX was reportedly rug pulled, and the people who put their money there lost it.
Quick Swap when $150/QUICK
Cometh Swap when $199/MUST
Wan Swap when $0.379/WASP
Venus Protocol when $79.63/XVS
Pancake Swap when $17.5/CAKE
Ape Swap when $1/BANANA
Bakery Swap when $1/BAKE
Conveyor Belt when $100/BELT
Pancake Bunny when $264/BUNNY
JulSwap when $0.15/JULD
Hyperjump when $1.2/ALLOY
Kebab Finance when $2.25/KEBAB
Spartan Protocol when $1.22/SPARTAN
DODO when $3.7/DODO
Swamp when $124/SWAMP
UBU Finance when $0.38/UBU
Goose Finance when $26.159/EGG
Salt Swap when $0.369/SALT
Slime Finance when $3.304/SLIME
Blue Finance when $4/BLUE
Many Swap when $0.28/MANY
Thunder Swap when $4.77/THUNDER
More Platforms Will Definitely Come
Leave a comment if you know more platforms and I will include them in the next post.
Presearch for New Users
Presearch is a search engine like Google, Yahoo, and Bing, with the difference that it is decentralized and powered by cryptocurrency technology. For new users, it is enough to just know the following information:
Interested in More?
Interested in Investing?
Unless you are planning to invest or to develop, you probably do not need any more information except that Presearch aims to be a decentralized search engine governed by the community. There is no harm in using it, and moreover you are rewarded with PRE tokens for doing so. If you want to invest, which means buying PRE tokens, then you need to dig deeper.
My Opinion on Their Integrity
Presearch started in 2017 with one man, then two, then three; the next year it became 12, and finally many (cited from the beginning of their whitepaper). Not only have they survived for 4 years, but they have also shown growth and have an actual product, and thus my opinion is that they are here to stay. Their team web page shows their LinkedIn accounts, which are somewhat more trusted accounts in the career ecosystem. While that alone is not enough to earn our trust, here is some additional information I found in the D&B Business Directory which may help with our investigation:
As a Developer
Their engine is not yet open source, but they have said it will be, and it has to be soon if they really plan for further decentralization. For now, there are two codebases worth looking at on their Github: Presearch Packages and the PRE Token Solidity Smart Contract along with its Audit Report. Through Presearch Packages, we can contribute richer search results; for example, the math package is triggered when someone searches for 5 * 10, the currency package is triggered when someone searches for two currencies (500 cad to usd), and the color picker is triggered when someone searches for color picker. The deployed smart contract is best viewed through Etherscan, where we can also see the holders of the token.
Their first step of decentralization is that they allow us to run nodes, with limited functions for now. While nodes at giant private corporations like Google are owned solely by them, Presearch allows anyone to run a node and even rewards them with PRE tokens for the work. When a user searches, the user connects to the best node based on latency and other factors, and tells that node to retrieve search results. In short, we can start participating in decentralizing Presearch; not all functions are decentralized yet, but they claim to be releasing decentralization in phases.
The narrative alone, a decentralized search engine powered by crypto technology, is already enough for people with full pockets to want a sense of ownership of Presearch by buying its token. If we ask random young investors who invested in Apple and Microsoft stocks, probably most of them do not know about dividends, their right to participate in votes, or the perks they may have if they become a major shareholder. Most would answer that it is just cool owning those stocks, plus the sense of fulfillment of being part of something big. Today, Presearch has not only survived for 4 years but also has a well documented roadmap.
However, that is not the only thing about the PRE token. The PRE token has utilities. Advertisements shown in today's search engines are based on keyword auctions: the highest bidder gets their ads shown. The problem with a centralized search engine is that the auction is controlled by one entity. No matter how transparent they are, they can never beat the transparency of a machine running an algorithm. Thus, the auction by Presearch is currently the most transparent, since the method is staking PRE tokens on certain keywords and letting the machine calculate who has the highest stake; that staker's ads are then shown on top. Ideally, the difference from a centralized search engine is custody. With a centralized search engine, we deposit our funds to them, unknowingly giving control to them and begging them to place the ads for us. With a decentralized search engine, the custody of our coins or tokens should remain with us: we stake our coins in the smart contract to display advertisements, we can unstake anytime, and of course we still hold the private key, so ownership remains fully ours even while staking. Unfortunately, Presearch is not at that level yet; the method is still deposit and withdraw through an account. Anyway, PRE tokens can be used to advertise, to disfavor other advertisements, or to make an ad-free experience for users. The process of staking is:
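The highest-stake-wins mechanism can be sketched like this (a toy model; the names and numbers are made up and this is not Presearch's actual implementation):

```python
# Toy keyword auction: the advertiser with the largest PRE stake on a
# keyword gets the top ad slot.
stakes = {
    "crypto wallet": {"alice": 1200, "bob": 900},
    "defi yield": {"bob": 300},
}

def top_advertiser(keyword):
    """Return the advertiser with the highest stake, or None if unbid."""
    bids = stakes.get(keyword, {})
    return max(bids, key=bids.get) if bids else None
```

Because selection is a deterministic comparison of on-chain stakes, anyone can verify why a given ad won, which is the transparency argument made above.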
Though we must not forget the very basis of this fundamental: advertisements are useless if there are no users or viewers. None of the people I have met in person use Presearch. If the users are not there, how will advertisers be attracted to spend their money on advertisements? Although the search rewards, node rewards, and other basically free token giveaways have a negative impact on the price, they are also ways to attract more users, so it is a two-way cycle. Therefore, the front end of the search engine is very important. For example, I prefer to use Presearch when typing a cryptocurrency name, as a CoinGecko widget immediately shows the coin's price analysis, but when searching for "time in UTC", I prefer Google, as it immediately shows the time in UTC.
If I had done my research at the beginning of this year, I would have seen the price as very undervalued and I would have dared to go all in. However, I am too late now that the price has already pumped, although based on the fundamentals it can still potentially increase greatly. Google still one-sidedly dominates the search engine ecosystem by number of users, which I think means there are not enough users for big players to place advertisements on Presearch. With the price already pumped in this condition, it will very likely drop if the bull market ends and there is no significant growth. While Google is only a cause of stagnancy, the only real threat to Presearch is another decentralized search engine, especially one where we can just use Metamask to stake and unstake. For now, my strategy is to DCA $5 - $10 at every consolidation on Kucoin.
This month I received many Netbox referral rewards, unlike other months which were purely from activity, so thanks for using my link! Currently the quantity is too much to handle relative to the small value earned, so I may report this in a separate article.
March 2021 Income ≈ $121
Like last month, the hundred dollars is probably due to the bull market and not my hard work, because this month I wrote far less than in previous months. Usually I needed to write at least once a day to reach $100. Well, this does not apply to everyone: my articles are not top-grade articles, which is why posting once a day is needed to reach $100. Top articles can probably make hundreds of dollars.
Even though I wrote much less, I was very happy this month as I added more professions to my list. I officially added crypto gem hunting and yield farming to my daily routine, and sometimes I was lucky enough to take some arbitrage trading opportunities. I was still stressed because I did not achieve my target of greatly multiplying my portfolio, which I wanted to use as collateral with my family and friends to start my personal startup or at least my independent job. Collateral here means that they would no longer bother me about finding official employment. I did not reach that goal, but I should be grateful that I profited another 100 months' worth of the average local salary this month.
However, this month I made my determination to charge through the walls despite not reaching my personal goal, which was to win 1000 months' worth of average salary. If you read my last month's report, I already had the intention then to go fully independent, meaning that I will not seek employment at any company. This month, I intend to fully realize that; for now my independent professions are blogging, gem hunting, yield farming, and arbitrage trading. In the future, I want to do more content creation like Youtube and NFTs, build applications related to cryptocurrency portfolios, write novels, and hopefully build a startup and gather teams for it.
Personally, I enjoy being a full-time independent content creator very much, and I once again thank the platforms, investors, donors, and viewers for making my venture possible through donations, tips, and upvotes. If you enjoy and/or want to further support my work, you may choose from more forms of donation:
While I was backing up data, I found documents that I wrote between my college graduation and before I continued my studies abroad, so I might as well post them and write an article about them. Basically, I was stressed because I was pushed without rest to find a job, then I had no choice but to take a stressful job, and finally I resigned because I was accepted to study abroad.
Insanely Applied to Valve
I was a gamer back then, especially since my friends were also gamers. The games we played were mostly from Steam. On their website, the jobs menu was always there and was emphasized back then. Their handbook about working there was also openly available electronically. The most interesting thing I remember is that their company structure was dynamic: anyone can come up with a project, the structure forms after the project is approved, and once the project finishes, the structure changes based on the next project. As I was desperate back then, I did not think twice about applying; after all, what is the disadvantage of just applying? Here is my cover letter:
Of course I was mad to even apply. Who would want a fresh graduate residing in another country without any special talent? The expenses would far outweigh the benefit of hiring me. Yes, without any details, they just rejected me. At the same time I applied to Blizzard as well, with the same result. In this period I also applied to Smartfren, a telecommunication service provider, because there was news about its open recruitment in the newspaper, but that did not sit well for me either, since they were looking for experts in telecommunication networks while I only had experience in computer networks.
Well Known Scholarships Back Then
I enjoyed my college studies and found learning and researching fun, and therefore even before I graduated I already planned to continue to graduate school and get the highest degree while enjoying my studies. My other agenda was to travel abroad and be in an international environment. If you read my post about the deals of going to school, you have probably read how much I despise student loans, because unless you are able to steer the wheel, you will probably end up a slave struggling to repay your debts. In that post I also wrote that the best way to continue studying is to get a scholarship. In this period I applied to many scholarships, and the globally popular ones back then were:
Yes, there are more scholarships, but back then they were not generally known, at least to my ears. You can search for scholarships in each country, for example with the search terms "scholarship in China", "scholarship in Russia", "scholarship in Germany", "scholarship in the Netherlands", etc. Generally, the requirements are:
For graduate school, in my experience, the other requirements are fine at the bare minimum. What is most important is the research proposal, because it reflects our plan for the whole study: what we will do there and what we will contribute to them. Graduate school is different from the spoon-feeding of primary and high school, and different from college, where we observe and learn and those who do well in exams are preferred because they can be taught more easily. In graduate school, they are not interested in teaching us; they look forward to our contribution to their research and, if possible, for us to lead.
The following cover letter is the one that I submitted to Erasmus Mundus:
I was rejected for the Erasmus Mundus scholarship. With my experience now, looking back at the cover letter I wrote, it contained my motivation to study and my agenda of being in an international environment. You don't study information communication technology (ICT) in graduate school; you study that in undergraduate school, or anyone can study it by themselves. While recruiting students from abroad does fulfill the purpose of making a more international environment, it is never the primary purpose. The primary purpose of graduate school is to find students who can contribute, and in this case my cover letter lacked a proposal.
The following is what I submitted to Fulbright and Australia Awards:
A slight improvement: I added my experiences and more details about my abilities, which is part of the information we must show when applying for a job. However, again there was no proposal of what I could contribute, just a display of what I could do. Then I was rejected by Australia Awards and did not pass the interview for Fulbright. My time was running out and my surroundings were impatient with my unemployed status, so I was forced to take a job, no matter what it was.
My First Stressful Employment
Why do I dare to write "stressful" in the heading? Because the higher-ups said so themselves: stress and overtime were the norm there, many employees ran away breaching their contracts, and if I wanted to escape as well, go ahead. They prided themselves on a metaphor of school: if we last until the end of the contract, we will be proud graduates no matter how badly we performed. After that, employees move on, or in rare cases continue working there.
This period was the first time in my life that I truly hated Sunday nights and loved Friday afternoons. Why? Because Sunday night carried the haunting thought of waking up at 5 the next morning to prepare for work, while Friday night was the best time of the week: I kept myself sane with cinema entertainment plus tasty food and drinks (usually I ate day and night at the cafeteria), along with the thought that I could sleep as much as I wanted and had no need to worry about work on Saturday. While it is true that I did not like Mondays and liked Saturdays back in my primary and high school days, it never went as far as hating them like in those working days.
Wearing uniforms and working days from 08:00 - 20:30 were still bearable, as I was used to working hard voluntarily. However, those things became a great nightmare when my job was something I had not only zero experience in but also zero knowledge of. In the interview, I explained clearly that I was a computer and network engineer, but do you know where they put me? Manufacturing engineering. Not since high school had I been humiliated as someone stupid. It never happened in my college life, because I prided myself as a strategist and planner who often daydreamed of grand schemes, so I am good at calculating my abilities. I looked great not because I was really great but because I only took jobs I could do, and I could even foresee how long they would take. When someone asked me for a complicated job, I would say, for example, that I needed 2 to 3 weeks, and if they did not want to take the risk, I even dared to say it was better to ask someone else.

In this employment's case, I was forced, because I was doubting the future results of my scholarship applications and was unclear about my plans. That pushed me into employment in an unfamiliar area, when it should never have come to this: my original intention was only an internship, for which I would even have paid, so they would have understood that my intention was only to study and gain experience, without high expectations. However, "why an internship, just get employed with a contract directly", they said. They did not understand that employment meant I had to do well in the job and contribute much to the company, which in my case would likely lead to disappointment. Wait, couldn't I just explain my background whenever I could not do something? Would you care if you saw a stupid employee? I might care, but most people would not; they would just scorn that employee for being stupid. Even if I explained my background on my own initiative, people would probably be too lazy to listen.
The reply most of the time was, "if you know that, why did you even accept this job?". That's right, why did I even accept that job? Because I was wavering, full of doubts, and unclear about my plans, and that employment was my escape from the humiliation of unemployment. Ironically, it was an escape from one prison to another.
Despite my complaints, from the public's perspective I was blessed. Other than the rich knowledge and experience, I was given a high salary compared to other fresh graduates, group housing with all the bills paid, transportation to my workplace, free lunch, and free dinner if I stayed for overtime. Plus, my salary could be doubled by working overtime every day, and since overtime was the norm, my salary was doubled by default. The everyday scolding and yelling were for the lazy and the fearful; hard work was appreciated. Employees called it a hellhole because they feared their bosses. I didn't. I had no hesitation when facing them: I told the situation truthfully, and if they didn't like it, I didn't care, and if they pushed the button, I was not afraid to bite back. But what could I do, in a job I had no knowledge of, except just listen to their anger? On the other hand, if you are competent or good at what you do, you dominate and lead instead. I saw the bosses relying on peers who had the background, who were very good at manufacturing engineering and had passion for it. Me? I was an easy target for the first three months, and after that I knew at least the basics, which was enough to keep them from pushing me around, as I covered my lack of brilliance with hard work. My seniors were often afraid of confronting our bosses; I did not understand the meaning of that fear, which was enough for me to dominate after those 3 months.
Compared to my colleagues from the same year, who were still under the mentorship of their seniors, my mentor taught me for only about 2 weeks and then released me into the wild by myself, since he was too busy to teach me. That suited me well: I went here and there, met people, asked the appropriate people about the appropriate problems, and volunteered for the general manager's extra lessons and projects during every overtime from 17:00 - 20:30 whenever I did not have any main jobs at the time. Still, I did not want to stay there long, as the work was not in line with my studies, which were computer networks, servers, and security at the time. With my workplace occupying 07:30 - 20:30 plus 30 minutes each way on the road, I only had 21:00 - 22:00 and the weekends for my own agenda. I was richer than most fresh graduates in my country, but what is the use of money if I do not spend it, and there was no way I had time to start learning business and investing.
Light at The End of The Tunnel
After three months I had already accepted my fate that I had to endure for 2 years; however, surprising news arrived in the fifth month. I read it again and again, unable to believe that I had been accepted for the MEXT scholarship in Japan! That was the month I worked the hardest, because I was so happy that light had appeared at the end of the tunnel. Other than the previous cover letters, the following are the additions I made when I applied for the MEXT scholarship:
Unlike before, I wrote what I planned to do, and additionally I boasted big that I wanted to be on the world's summit one day. While I did write some sort of plan, the me of today evaluates that the writing succeeded through some sort of luck. The proposal lacked details: what kind of security, why security is important (I should have written a background story, or failing that, a future possibility or potential problem), and lastly the methods were missing from my proposal. If not a method, then at least an idea that could somewhat complete the story of my proposal. Maybe I am right that what I wrote above was not enough, but there was also an email interview, and here is an important part of what I wrote:
When I said that even before I graduated I wanted to continue to graduate school, I already wanted to go as far as a PhD, and I mentioned that in my written interview. While at the time it was just a mention and a desire, when I made it to Japan that mention was actually one of the determining factors: when I reached the lab, I was already given more research to do because, they said, they knew I wanted to continue to a PhD, while the other Master's students had it more leisurely. I also mentioned that after I finished everything, I wanted to participate more in international events. I feel bad that my priorities have changed today, but it is still excusable since we are still in the COVID-19 pandemic. At that time I had already given up and left everything to fate, so I might as well go out with a bang in my final letter. I wrote that in the future I wanted to unify the world so that we can go anywhere at any time. The idea is actually a courtesy of a cinematic video game by Kojima Productions, led by Hideo Kojima: Metal Gear Solid 4: Guns of the Patriots. The original story is that a man almost successfully unified the world from the shadows, a world with no more borders, where we are just creatures walking this earth with no races and no nationalities, a singularity but also a society as a whole. Now that I think about it, maybe that statement was also a determining factor.
Anyway, as I was accepted, I worked the best I could before I resigned. Though I hated the working system, I was grateful for the friends I made. Like in many places, they were pleasant people who helped me and took me out to have fun. I made it to Japan and continued my studies in graduate school. I find it funny that some of my peers are stressed by the studies; for me, this is heaven compared to 08:00 - 20:30 in uniform. I forgot to mention that employees below manager level were not allowed to bring their own computer devices or use the Internet; we could not even plug in a USB drive to transfer data, and I even had to hack around the Trend USB software.

Well, I heard that some of my peers here were used to leisurely work, like playing games and rewatching Korean dramas, because there was less work, and most of them missed their families. As for me, I enjoyed the classes, I enjoyed the research, I enjoyed the free electricity and very fast Internet connection, I was especially thankful for the scholarship, and for the first time in my life, I lived freely and independently. Aside from the classes and presenting my progress report every Monday, I was free to wake and sleep whenever I wanted, go wherever I wanted, eat whatever I wanted, and meet whomever I wanted. Basically, I was free to make my own schedule: once I no longer had classes, I researched and indulged in entertainment depending on my mood until 5 in the morning, then slept until 2 in the afternoon, and my schedule kept changing however I wanted. They did not complain, because I delivered my progress reports on time and frequently exceeded their expectations. Other than that, I had money to invest! That was the start of the happiest time of my life.
This is one of my Doctoral assignments from the Advanced Computer Architecture II course which has never been published anywhere. I, as the author and copyright holder, license this assignment under a customized CC-BY-SA where anyone may share, copy, republish, and sell it on the condition that they state my name as the author and note that the original and open version is available here.
Peripheral interface controller (PIC) is a family of microcontrollers made by Microchip Technology. A microcontroller is a one-chip computer that includes a microprocessor, memory, and peripherals. PIC devices are popular with both industrial developers and hobbyists due to their low cost, wide availability, large user base, extensive collection of application notes, availability of low-cost or free development tools, serial programming, and re-programmable Flash-memory capability. They can be programmed to be timers, to control a production line, to control light and sound intensity with a few sensors, and to perform other kinds of tasks. The PIC microcontroller has a five-stage basic instruction cycle: fetch, decode, execute, memory, and write (FDEMW).
2. Verilog HDL Design
The Verilog hardware description language (HDL) design is based on Figure 1. This section starts by constructing the arithmetic logic unit (ALU), bitmask, and W register. It then continues with the design of the program counter and return stack, whose values are sent to the instruction register, where the decode and control behavior also resides. Next is the design of the special registers, although effective addressing is discussed in an earlier part. After that, the built modules have to be connected to the previously created ALU, bitmask, and W register. Lastly, sleep and the tristate buffer are implemented.
2.1 Arithmetic Logic Unit
Code 1. Input and output of ALU
Code 2. Bitmask
Code 3. Up to add and sub
The Verilog design of the ALU is based on the diagram in Figure 2. The input, output, and process look clear and are implemented in Code 1; however, the detailed operation within the bitmask, ALU, and W register should be examined in Code 2, Code 3, and Code 4. The ALU operates between the value in the W register and the current input FI. For addition and subtraction, Code 3 should follow the diagram in Figure 3, while the other operations are not as complicated and are in Code 4. After that, the output can be written as in Code 5. The operation definitions are available in Code 6, which uses the 2nd to 6th bits of the opcode from left to right.
Code 4. Other Operations
Code 5. Output, W Register, and Flags
Code 6. ALU Operation Definition
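To make the flag behavior concrete, here is a small Python model of how a PIC16-style 8-bit ALU addition would update the carry (C), digit carry (DC), and zero (Z) status flags. This is my own illustration based on the datasheet, not the report's Verilog, and the names `alu_add`, `w`, and `fi` are mine.

```python
# Illustrative Python model (an assumption, not the report's Verilog) of a
# PIC16-style 8-bit ALU addition and the STATUS flags it affects.
def alu_add(w, fi):
    """Add W and FI, returning the 8-bit result and the C, DC, Z flags."""
    result = (w + fi) & 0xFF
    c = int(w + fi > 0xFF)                     # carry out of bit 7
    dc = int((w & 0x0F) + (fi & 0x0F) > 0x0F)  # digit carry out of bit 3
    z = int(result == 0)
    return result, {"C": c, "DC": dc, "Z": z}

print(alu_add(0xFF, 0x01))  # → (0, {'C': 1, 'DC': 1, 'Z': 1})
```

Adding 0x01 to 0xFF wraps around to zero, so all three flags are raised at once, which is the kind of corner case the waveform test later exercises.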
2.2 Core Input, Output, and Register
Code 7. Input, Output, and Register for Core Module
2.3 Effective Addressing
Based on Figure 5, Code 8 should form the address from RP for direct addressing; otherwise, for indirect addressing, the address should come from IRP and FSR.
Code 8. Effective addressing for core module
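As a hedged sketch of that selection (my reading of the PIC16 datasheet, not the report's actual Code 8), the effective address can be modeled in Python as follows: direct addressing combines the RP bank bits with the 7-bit address from the opcode, while indirect addressing combines the IRP bit with the 8-bit FSR register. The function name and parameters are my own.

```python
# Illustrative model of PIC16 effective-address formation (an assumption
# based on the datasheet, not the report's Verilog).
def effective_address(direct, rp=0, opcode_addr=0, irp=0, fsr=0):
    if direct:
        # direct: bank-select bits RP1:RP0 + 7-bit address from the opcode
        return (rp << 7) | (opcode_addr & 0x7F)
    # indirect: IRP bit + the 8-bit FSR register
    return (irp << 8) | (fsr & 0xFF)

print(hex(effective_address(True, rp=0b01, opcode_addr=0x25)))  # → 0xa5
```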
2.4 Program Counter and Return Stack
For Code 9, about the program counter and return stack, the value of PC is based on the left diagram of Figure 6. When the operation is a call, the stack is pushed, and when the operation is a return, the stack is popped. The value of STKP should be based on the right diagram of Figure 6.
Code 9. Program counter and return stack
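The push/pop behavior can be sketched in Python. This is my own model of the common 8-level PIC16 return stack; the report's Code 9 may differ in detail, and the class and method names are mine.

```python
# Illustrative model of the program counter with an 8-level return stack.
class PicCore:
    def __init__(self, depth=8):
        self.stack = [0] * depth
        self.stkp = 0    # stack pointer STKP
        self.pc = 0      # program counter PC

    def call(self, target):
        self.stack[self.stkp] = self.pc + 1            # push return address
        self.stkp = (self.stkp + 1) % len(self.stack)  # STKP wraps when full
        self.pc = target

    def ret(self):
        self.stkp = (self.stkp - 1) % len(self.stack)  # pop
        self.pc = self.stack[self.stkp]

core = PicCore()
core.pc = 5
core.call(0x10)   # jump to 0x10, remembering address 6
core.ret()        # back to 6
print(core.pc)    # → 6
```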
2.5 Instruction Memory and Register
Code 10. Instruction memory and Register
2.6 Decode and Control
To write Code 11, the instruction table and instruction details in the datasheet should be referred to. Code 11 is written starting from the first two bits of the instructions, then the next four bits. Refer again to the datasheet for which status bits are affected. Unfortunately, sleep here is a repeated NOP.
Code 11. Decode and control
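The first decoding stage described above can be illustrated with a small Python dispatcher. This is my own sketch of the four 14-bit PIC16 instruction classes from the datasheet, not the report's Code 11.

```python
# Illustrative first-stage decode of a 14-bit PIC16 instruction word: the
# top two bits select the instruction class; further bits then select the
# exact operation (not modeled here).
def decode_class(instr):
    top2 = (instr >> 12) & 0b11
    return {
        0b00: "byte-oriented file register operation",
        0b01: "bit-oriented file register operation",
        0b10: "GOTO or CALL",
        0b11: "literal and control operation",
    }[top2]

print(decode_class(0x2000))  # → GOTO or CALL
```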
2.7 Special Register
Code 12, about the special registers, is based on the memory map in Figure 3 for the written bits and on Figure 8, about the special registers themselves, for their values.
Code 12. Special register
Code 13. Data RAM
2.8 Data Path
Code 14. Data selector for ALU
2.9 ALU Initiate
Code 15. ALU initiate code
2.10 Sleep
Back in Code 11, sleep is a repeated NOP. Here in Figure 9, waking up from sleep is not implemented; the core sleeps forever but can be reset.
Code 16. Sleep implementation
2.11 Tristate Buffer
Code 17. Tristate buffer implementation
All the code to conduct the simulation is available online. For solely testing the ALU, follow Figure 13, which is about generating the clock and testing operations starting from PASSF and subtraction up to bit test. Figure 13 compiles the test sequence from text format into Verilog HDL format using the make_vector.pl script. Then these files, including Code 1-6, are compiled using the verilog binary. The waves can be examined using simvision, as shown in Figure 14. All the wave values are shown in hexadecimal. CB shows the executed operation. It is seen that the W register becomes 1 when an increment operation is performed and is reduced to 0 when a subtract operation is performed; note that HC and CO have started to be affected. After that are the logical operations, where the result can be seen on FO as well. At the end of this simulation, the bit manipulation operations are performed, where the B and bitmask variables are affected.
Figure 15 shows the diagram for testing the PIC16 core. The program.asm shows that only 10 operations are tested. Next, it has to be converted into an assembly file using gpasm, and then the format has to be converted. After that, the whole set of PIC16 core files can be compiled using verilog, and the waves can be seen using simvision in Figure 16. The first part of the test should bit-set RP, clear W, set TRISB to 00h, and bit-clear RP. The next operations perform the addition of ten, ten times. DData, RData, and WData should look consistent. First the value should be 0A, which is hexadecimal for 10; then it should increase to 1B, with 10 added each time. Note that the decrements are also shown, from 0A down to 01. In the end, the result is 37 and is transferred to PORTB. The last operation is sleep. Note that the design in this report does not implement everything from the original, as shown in Table 1.
Table 1. Original PIC16 versus this report’s design
The Verilog HDL code can be implemented on an FPGA. In this report, the Nexys4 DDR board shown in Figure 17 is used, and the Vivado software is used to synthesize the code. The LEDs should show 110111(2), which is 37 in hexadecimal, the result of the addition.
This is one of my Doctoral assignments from the Current Science and Technology in Japan course which has never been published anywhere. I, as the author and copyright holder, license this assignment under a customized CC-BY-SA where anyone may share, copy, republish, and sell it on the condition that they state my name as the author and note that the original and open version is available here.
Pipelining for microprocessor
A microprocessor is an electronic component used by a computer to do its work. It is a central processing unit (CPU) on a single integrated circuit (IC) chip containing millions of very small components, including transistors, resistors, and diodes, that work together. The traditional microprocessor is very simple, but it is good for explanation in class. The traditional one has five stages, which in order are fetch, decode, execute, memory, and write. In Figure 1, it is seen that the program counter accesses the instruction memory, then the register fetches the instruction, next the instruction is decoded by the decoder, later it is sent to the arithmetic logic unit (ALU), which executes it, and finally the result is stored in the data memory and written into the register.
Pipelining is a technique to speed up processing. Without pipelining, the processor has to wait until all five stages finish before it can execute a new instruction; in other words, serial processing. Pipelining, however, allows the processor to start processing the next instruction without waiting until the previous instruction is finished; in other words, parallel processing. Figure 2 shows the simplest illustration, but the technique has grown vast; for example, there are parallel operation, superscalar, superpipelining, and very long instruction word (VLIW). There are also data dependency problems, such as flow, control, anti, input, and output dependencies, that prevent performance improvement, and there are techniques to mitigate them, such as data forwarding and dynamic code scheduling.
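The speed-up can be made concrete with a small Python calculation (a textbook idealization, not a claim about any specific processor): with k stages and n instructions, serial execution takes n·k cycles, while an ideal pipeline takes k + (n − 1) cycles.

```python
# Idealized pipeline timing: every stage takes one cycle and there are no
# stalls from data or control dependencies (a simplifying assumption).
def serial_cycles(n, k=5):
    return n * k           # each instruction occupies all k stages in turn

def pipelined_cycles(n, k=5):
    return k + (n - 1)     # fill the pipe once, then finish 1 per cycle

print(serial_cycles(100), pipelined_cycles(100))  # → 500 104
```

For 100 instructions on a 5-stage pipeline, that is 500 cycles versus 104, approaching a 5x speed-up as n grows, which is exactly why the dependency problems above matter: every stall eats into that ideal figure.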
GPGPU and CUDA
GPGPU originates from the graphics processing unit (GPU), whose job is to process graphics, whether 2D (two-dimensional) or 3D, still pictures or moving pictures (movies); there are also animations and games, not to forget that the monitor refreshes around sixty times a second. Processing even a single graphic takes a lot of math and algorithms, which is very heavy for the CPU, which is why engineers back then created the GPU solely to handle graphics. Today, an innovation was made so that the GPU can be used for general purposes, and from this comes the term general-purpose graphics processing unit (GPGPU).
For comparison, the current Intel i7 processor has 4 processing units (cores), while a GPGPU can have hundreds or thousands of cores; the current GeForce GTX 1080 Ti has 3584 cores. Figure 3 shows an illustration of CPU versus GPU. The essence of GPGPU is parallel processing (most people agree that back when it was a GPU, it was used to process the pixels in graphics in parallel). Nvidia created CUDA, a parallel computing platform and programming model that allows their GPUs to be utilized as GPGPUs. The simplest example is a loop program, for example a hundred loops. On a CPU the loops are processed in serial order from one to a hundred, while on a GPGPU the hundred loops are processed at once (depending on the number of cores). The processing speed greatly increases; I have example codes in C, C++, Python, Octave, and R which compare running on the CPU and on the GPGPU using CUDA and OpenACC. However, a GPGPU can only run non-specialized processes, which is why the CPU is still needed. The theory is long, but simply, the process is first defined on the CPU, then the CPU divides the work among the cores of the GPGPU, and lastly the GPGPU returns the result to the CPU.
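The idea of applying the same per-element "kernel" to many elements at once can be sketched in plain Python, using a thread pool as a conceptual stand-in for the GPU's cores; this is not actual CUDA code, and the `kernel` function is my own toy example.

```python
# Conceptual sketch of data parallelism: the same kernel runs independently
# on every element, so the work can be spread over many workers at once.
from concurrent.futures import ThreadPoolExecutor

def kernel(x):
    return x * x            # the per-element work, like a CUDA thread body

data = list(range(100))
with ThreadPoolExecutor() as pool:   # stand-in for the GPU's many cores
    result = list(pool.map(kernel, data))

print(result[:5])  # → [0, 1, 4, 9, 16]
```

Because `kernel` has no dependency between elements, each of the hundred "loops" can run on its own worker, which is the property that makes a problem GPU-friendly in the first place.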
OpenMP stands for open multi-processing; it is an application program interface (API) that provides parallelism in shared memory through the implementation of multi-threading. Since it works on shared memory, OpenMP is only usable on multi-core CPUs, where shared memory is a space in memory shared by the cores of the CPU. The other keyword is thread, which is the smallest unit of a process that can be scheduled; a process can be single-threaded or multi-threaded. OpenMP allows the threads to run in parallel, speeding up the process and optimizing the use of resources. The core elements of OpenMP are the constructs for thread creation, workload distribution (work sharing), data-environment management, thread synchronization, user-level runtime routines, and environment variables. OpenMP supports C, C++, and Fortran. Going back to my coding in OpenACC, the pragma directive "acc" can be changed to "omp" to use OpenMP in the code.
OpenCL stands for open computing language. Although today people mostly praise it for its parallelism, OpenCL is not just about parallelism but is an open standard computing language that supports heterogeneous systems, that is, many kinds of devices such as CPUs, GPUs, digital signal processors (DSPs), mobile devices, and even FPGAs. It supports many more devices, unlike CUDA, which only supports Nvidia devices. Although CUDA is great for personal projects because it has more libraries and a better programming interface, if the product is to be commercialized then OpenCL is preferable because it supports many devices. Also, if the goal is purely to make the code as open as possible, OpenCL is best because of its compatibility: the code can be ported to all sorts of devices. OpenCL coding is based on C99, but today it also supports C++11. The concept of parallelism is almost the same as in CUDA, OpenMP, and OpenACC.
This is another of my Doctoral assignments from the Current Science and Technology in Japan course which has never been published anywhere. I, as the author and copyright holder, license this assignment under a customized CC-BY-SA where anyone may share, copy, republish, and sell it on the condition that they state my name as the author and note that the original and open version is available here.
Graphene is a single layer of carbon atoms arranged in an interconnected hexagonal lattice. It attracts the attention of many researchers; some call it a wonder material, a miracle substance, or a substance so surprising that people thought it could only be found in a comic book, all due to its amazing properties. It is one atom thick, it conducts electricity better than silver, it conducts heat better than diamond, it is stronger than steel, it is lighter than a feather, it is transparent, and it is bendable. Example application possibilities are replacing silicon transistors with graphene transistors in computers, which could raise the frequency tenfold from 100 to 1000 gigahertz; making unbreakable device screens; and serving as a better material for water desalination. Graphene was isolated in 2004 by Andre Geim and Konstantin Novoselov from the University of Manchester. They used a simple method, using scotch tape to peel graphite (the lead of a pencil), or stacks of graphene sheets, down to a single graphene sheet. Although graphene has amazing properties, it is still a future material because it is very difficult to produce and very expensive for mass production.
Since graphene is very difficult and expensive to produce, researchers divide their attention to graphene's derivatives. Though the derivatives have less amazing properties, their properties can be tuned through certain processes. One of the derivatives is graphene oxide, a single-atomic-layered material made by the powerful oxidation of graphite. It can be described as an oxidized form of graphene laced with oxygen-containing groups. It is considered easy to process since it is dispersible in water (and other solvents), and it can even be used to make graphene. It is commonly sold in powder form, dispersed, or as a coating on substrates. There are four basic methods of synthesizing graphene oxide: Staudenmaier, Hofmann, Brodie, and Hummers.