Bull Bitcoin’s Dollar-Cost Averaging tool for Canadians: a detailed overview
Hello fellow Canadian Bitcoiners! I'm Francis Pouliot, CEO and founder of Bull Bitcoin (previously known as Bitcoin Outlet) and Bylls. I haven't been active on Reddit for a while but I thought I'd pop back here to let the community know about our new dollar-cost averaging feature, "Recurring Buy". This post is a copy of my most recent Medium article, which you can read here if you want to see the screenshots. https://medium.com/bull-bitcoin/bull-bitcoins-dollar-cost-averaging-tool-for-canadians-the-right-time-to-buy-bitcoin-is-every-day-82a992ca22c1 Thanks in advance for any feedback and suggestions! [Post starts here] The Bull Bitcoin team is constantly trying to reduce the frictions ordinary people face when investing in Bitcoin and to propose innovative features which ensure our users follow Bitcoin best practices and minimize their risks. We are particularly excited and proud about our latest feature: an automated Bitcoin dollar-cost averaging tool which we dubbed "Recurring Buy". The Recurring Buy feature lets Bull Bitcoin users create an automated schedule that buys Bitcoin every day using the funds in their account balance and sends the Bitcoin directly to their Bitcoin wallet straight away. We put a lot of thought into the implementation details and into striking the right trade-offs for a simple and elegant solution. Our hope is that it will become a standard other Bitcoin exchanges will emulate for the benefit of their users. This standard will certainly evolve over time as we accumulate feedback and operational experience. In this article, I cover: The problem that we are trying to solve Recurring Buy feature details, processes and instructions The rationale (and tradeoffs) behind the main feature design choices Bull Bitcoin is only available to Canadians, but non-Canadians who wish to see how it works are welcome to make a Bull Bitcoin account and try it out here. 
You will be able to go through the process of creating the schedule for testing purposes, but you won't be able to fund your account and actually purchase Bitcoin. What problems does Dollar-Cost Averaging solve? The most common concern of Bitcoin investors is, not surprisingly, "when is the right time to buy Bitcoin?". Bitcoin is indeed a very volatile asset. A quick glance at a Bitcoin price chart shows there are without a doubt "worse times" and "better times" to invest in Bitcoin. But is that the same as the "right" time? Gurus, analysts and journalists continuously offer their theories explaining what affects the Bitcoin price, supported by fancy trading charts and geopolitical analysis, further reinforcing the false notion that it is possible to predict the price of Bitcoin. Newbies are constantly bombarded with mainstream media headlines of spectacular gains and devastating losses. For some, this grows into an irresistible temptation to get rich quick. Others become crippled with the fear of becoming "the sucker" on which early adopters dump their bags. Veterans are haunted by past Bitcoin purchases which were quickly followed by a crash in the price. "I should have waited to buy the dip…" Many Bitcoin veterans and long-term investors often shrug off the question of when is the right time to buy with the philosophy: "just hodl". But even those holding until their death will recognize that buying more Bitcoin for the same price is a better outcome. Given the very high daily volatility of Bitcoin, a hodler can find himself, many years later, with significantly less wealth just because he once bought Bitcoin on a Monday instead of a Wednesday. His options are either to leave it up to chance or make an attempt to "time the market" and "buy the dip", which can turn into a stressful trading obsession, irrational decisions (which have a negative impact on budget, income and expenses) and severe psychological trauma. 
In addition, trying to "buy the dip" is often synonymous with keeping large amounts of fiat on an exchange to be ready for "when the time comes". There must be a better way. Bitcoin investors should be rewarded for having understood Bitcoin's long-term value proposition early on, for having taken the risk to invest accordingly and for having followed best practices. Not for being lucky. Overview of features and rules In this section I go into every detail of the Recurring Buy feature. In the following section, I focus on explaining why we chose this particular user experience. The user first decides his target investment amount. Ideally, this is a monthly budget or yearly budget he allocates to investing in Bitcoin based on his projected income and expenses. The user then chooses either the duration of the Recurring Buy schedule or the daily purchase amount. The longer the better. The frequency is each day and cannot be modified. The user must submit a Bitcoin address before activating a Recurring Buy schedule. By default, every transaction will be sent to that Bitcoin address. It's the fallback address in case they don't provide multiple addresses later. Once the user has filled the form with the target amount, the duration and the Bitcoin address, he can activate the Recurring Buy schedule. The user is not required to already have funds in his account balance to activate the schedule. We will randomly select a time of day at which his transaction will be processed (on the hour, so 24 possible times). If the user insists on another time of day, he can cancel his Recurring Buy schedule and try again. The Recurring Buy feature as displayed on bullbitcoin.com/recurring-buys The schedule is then displayed to the user, showing the times and dates at which transactions will take place in the future. The user will be able to see how long his current balance will last. 
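The balance-countdown logic described above is simple to reason about: the number of remaining purchases is just the account balance divided by the daily amount, and the processing time is one of 24 hourly slots chosen at random. Here is a minimal sketch in Python; the names and amounts are illustrative assumptions of mine, not Bull Bitcoin's actual code:

```python
import random
from datetime import date, timedelta

def build_schedule(balance_cad, daily_amount_cad, start):
    """One purchase per day, at a randomly assigned hour (24 possible slots),
    until the pre-funded fiat balance runs out."""
    hour = random.randrange(24)
    days_remaining = int(balance_cad // daily_amount_cad)
    purchase_dates = [start + timedelta(days=i) for i in range(days_remaining)]
    return hour, purchase_dates

def average_acquisition_cost(purchases):
    """CAD per BTC across a list of (cad_spent, btc_received) purchases."""
    total_cad = sum(cad for cad, _ in purchases)
    total_btc = sum(btc for _, btc in purchases)
    return total_cad / total_btc

# A $300 balance at $100/day lasts exactly three purchases.
hour, dates = build_schedule(300, 100, date(2020, 1, 1))
```

The auto-cancellation rule described later (stop when the balance can no longer cover the next purchase) falls out of the same integer division.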
He can follow the progress of the dollar-cost averaging schedule, monitor in real time his average acquisition cost, and audit each transaction individually. At this point, the user can and should change the Bitcoin address of his next transactions to avoid address re-use. Address re-use is not forbidden, but it is highly discouraged. After having modified the Bitcoin addresses, there is nothing left for the user to do except watch the bitcoins appear in his Bitcoin wallet every day! The bitcoins are sent right away at the time of purchase. Bitcoin transactions using the Recurring Buy feature will have the lowest possible Bitcoin network transaction fee to avoid creating upwards pressure on the fee market and impacting other network users. What users see after first activating a schedule The Recurring Buy schedule will be cancelled automatically at the time of the next purchase if the balance is insufficient. The user can add more funds to his balance whenever he wants. The Recurring Buy schedule will continue until the target amount is reached or until the account balance runs out. The user can cancel his Recurring Buy schedule whenever he wants. If the user wants to change the amount or duration of the schedule, he can simply cancel his current schedule and create a new one. Each schedule has a unique identifier so that users can keep track of the various schedules they run over time. Once a schedule is completed, either fully or partially, a summary will be provided which shows the number of transactions completed, the average acquisition cost, the total amount of Bitcoin purchased and the total amount of fiat spent. Useful for accounting! A partially completed Recurring Buy schedule cancelled after 9 days due to insufficient funds Thought process behind our design choices Recurring Bitcoin Purchases vs. 
Recurring Payment/Funding The first and most important design choice was to separate the process of funding the account balance with fiat (the payment) from the process of buying Bitcoin (the purchase). Users do not need to make a bank transaction every time they do a Bitcoin purchase. They first fund their account manually on their own terms, and the recurring purchases are debited from their pre-funded account balance. Another approach would have been to automatically withdraw fiat from the user's bank account (e.g. a direct debit or subscription billing) for each transaction (like our friends at Amber) or to instruct the user to set up recurring payments to Bull Bitcoin from their bank account (like our friends at Bittr). The downside of these strategies is that they require numerous bank transactions, which increases transaction fees and the likelihood of triggering fraud and compliance flags at the user's bank. However, this does remove the user's need to keep larger amounts of fiat on the exchange and reduces the friction of having to make manual bank payments. Bull Bitcoin is currently working on a separate "Recurring Funding" feature that will automatically debit fiat from the user's bank account on a separate recurring schedule with a minimum frequency of once a week, with a target of once every two weeks or once a month to match the user's income frequency. This can, and will, be used in combination with the "Recurring Buy" feature, but both can be used separately. The ultimate experience that we wish to achieve is that users will automatically set aside, each paycheck (two weeks), a small budget to invest in Bitcoin using the "Recurring Funding" feature which is sufficient to refill their account balance for the next two weeks of daily recurring purchases. Frequency of transactions The second important decision was about customizing the frequency of the schedule. We decided to make it "each day" only. 
This is specifically to ensure users have a large enough sample size and remain consistent, which are the two key components of a successful dollar-cost averaging strategy. A higher number of recurring transactions (larger sample size) will result in the user's average acquisition cost being closer to the actual average Bitcoin price over that period of time. Weekly or monthly recurring purchases can provide the same effectiveness if they are performed over a duration which is 7x longer (weekly) or 30x longer (monthly). It is our belief that the longer the duration of the schedule, the more likely the user is to cancel the recurring buy schedule in order to "buy the dip". Dollar-cost averaging is boring, and watching sats appear in the wallet every day is a good way to reduce the temptation of breaking the consistency. We do not force this on users: they can still cancel the schedule if they want and go all-in. We consider it more of a gentle nudge in the right direction. Frequency of withdrawals (one purchase = one bitcoin transaction) This is one of the most interesting design choices because it is a trade-off between scalability (costs), privacy and custody. Ultimately, we decided that trust-minimization (no custody) and privacy were the most important, at the expense of long-term scalability and costs. Realistically, Bitcoin network fees are currently low and we expect them to remain low for the near future, although they will certainly increase massively over the long term. One of the ways we mitigated this problem was to select the smallest possible transaction fee for transactions done in the context of Recurring Buy, separate from the fees on regular Bitcoin purchases (which, at Bull Bitcoin, are very generous). Note: users must merge their UTXOs periodically to avoid being stuck with a large number of small UTXOs in the future when fees become more expensive. This is what makes me most uncomfortable about our solution. 
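The UTXO concern is easy to quantify. A native-segwit (P2WPKH) input adds roughly 68 vbytes to a transaction, an output about 31, plus roughly 11 vbytes of fixed overhead, so the cost of sweeping a year of daily buys into one output scales linearly with both the UTXO count and the fee rate. A back-of-the-envelope sketch, with the vbyte sizes being common approximations rather than exact figures:

```python
P2WPKH_INPUT_VB = 68    # approximate vbytes per native-segwit input
P2WPKH_OUTPUT_VB = 31   # approximate vbytes per output
TX_OVERHEAD_VB = 11     # version, locktime, counts (approximate)

def consolidation_fee_sats(n_utxos, sat_per_vbyte):
    """Rough fee, in satoshis, to sweep n_utxos into a single output
    at the given fee rate."""
    vsize = n_utxos * P2WPKH_INPUT_VB + P2WPKH_OUTPUT_VB + TX_OVERHEAD_VB
    return vsize * sat_per_vbyte

# Sweeping 365 daily UTXOs at 1 sat/vB vs waiting for a 100 sat/vB market:
low = consolidation_fee_sats(365, 1)     # 24,862 sats (~0.00025 BTC)
high = consolidation_fee_sats(365, 100)  # 2,486,200 sats (~0.025 BTC)
```

The two orders of magnitude between `low` and `high` are exactly why merging UTXOs while fees are cheap matters.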
I hope to also solve this problem, but it is ultimately something Bitcoin wallets need to address as well. Perhaps an automated tool in Bitcoin wallets which merges UTXOs periodically when fees are low? Food for thought. When transaction fees and scalability become a problem for us, they will have become a problem for all other small payments on the Bitcoin network, and we will use whatever solution is most appropriate at that time. It is possible that the Lightning Network ends up being the scalability solution, although currently it is logistically very difficult to perform automated payouts to users over Lightning, particularly recurring payouts, which require users to create Bolt11 invoices and to convince other peers in the network to open and fund channels with them for inbound capacity. These are the general trade-offs:
- Send a Bitcoin transaction for every purchase (what we do): most expensive for the exchange, most expensive for the user (many UTXOs), increases the Bitcoin network UTXO set, inefficient usage of block space, most private, zero custody risk.
- Keep custody of the Bitcoin until the schedule is over or the user requests a withdrawal (what Coinbase does): no additional costs, no blockchain bloating, same level of privacy, high custody risk.
- Batch user transactions together at fixed intervals (e.g. every day): slightly lower transaction costs for the exchange, same costs for the user, slightly more efficient use of block space, same level of UTXO set bloating, much lower level of privacy, slightly higher custody risk.
Single address vs multiple addresses vs HD keys (xpubs) The final decision we had to make was about preventing address re-use and whether to allow users to provide an HD key (xpub) rather than a Bitcoin address. Address re-use generally decreases privacy because it becomes possible for third-party blockchain snoops to figure out that multiple Bitcoin transactions are going to the same user. 
But we must also consider that even if transactions are sent to multiple addresses, particularly if they are small amounts, it is highly likely that the user will "merge" the coins into a single transaction when spending from his wallet. It is always possible for users to prevent this using Coinjoin, in which case there is a large privacy gain in not re-using addresses compared to using a single address. It is important to note that this does not decrease privacy compared to regular Bitcoin purchases on Bull Bitcoin outside of "Recurring Buy". Whether a user has one transaction of $1000 going to a Bitcoin address or 10 x $100 going to that same Bitcoin address doesn't reveal any new information about the user other than the fact that he is likely using a dollar-cost averaging mechanism. It is rather a missed opportunity to gain more privacy. Another smaller decision was whether or not we should ask the user to provide all his addresses upfront before being able to activate the schedule, which would completely remove the possibility of address re-use. We ultimately decided that because this process can take a very long time (imagine doing Recurring Buy every day for 365 days) it is better to let the user do this at his own pace, particularly because he may eventually change his Bitcoin wallet and forget to change the addresses in the schedule. There are also various legitimate use-cases where users have no choice but to re-use the same address. A discussion for another day! Asking the user to provide an XPUB is a great solution to address re-use. The exchange must dynamically derive a new Bitcoin address for the user at each transaction, which is not really a technical challenge. As far as I can tell, Bittr is the only Bitcoin exchange which has implemented this technique. Kudos! It is however important that the user doesn't reuse this XPUB for anything else, otherwise the exchange can track his entire wallet balance and transaction history. 
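The bookkeeping an exchange needs for xpub payouts amounts to a per-user derivation index: each purchase consumes the next index, so no address is ever handed out twice. A sketch of that logic follows; note that `derive_address` here is a hash-based stand-in so the example runs without a Bitcoin library, whereas a real implementation would perform BIP32 child-key derivation from the xpub:

```python
import hashlib

def derive_address(xpub, index):
    """Stand-in for BIP32 derivation (e.g. path m/0/index from the xpub).
    Hashing keeps the sketch self-contained; a real exchange would derive
    a genuine Bitcoin address here."""
    digest = hashlib.sha256(f"{xpub}/{index}".encode()).hexdigest()
    return "bc1q" + digest[:20]

class XpubPayoutTracker:
    """Hands each recurring purchase a fresh address, preventing re-use."""

    def __init__(self, xpub):
        self.xpub = xpub
        self.next_index = 0

    def next_address(self):
        addr = derive_address(self.xpub, self.next_index)
        self.next_index += 1
        return addr
```

The only state to persist is `next_index`; everything else is derived on demand.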
It is worth noting that not all wallets support HD keys or have HD keys by default (e.g. Bitcoin Core). So it is imperative that we offer the option to give Bitcoin addresses. We believe there is a lot of potential to create wallet coordination mechanisms between senders and recipients which would make this process a lot more streamlined. In the future, we will certainly allow users to submit an XPUB instead of having to manually input a different address. But for now, we wanted to reduce the complexity to a minimum. Conclusion: personal thoughts I have a somewhat unique perspective on Bitcoin users due to the fact that I worked at the Bitcoin Embassy for almost 4 years. During this time, I had the opportunity to discuss face-to-face with thousands of Bitcoin investors. One of my favourite anecdotes is a nocoiner showing up at our office in December 2013 with a bag full of cash attempting to buy Bitcoin, “I know how to read a chart”, furious after being turned away. Many people who went “all-in” for short-term gains (usually altcoins) would show up to the Bitcoin Embassy office months later with heart-breaking stories. This isn’t what I signed up for. My goal is to help people opt-out of fiat and, ultimately, to destroy the fiat currency system entirely. This instilled in me a deep-rooted concern for gambling addiction and strong aversion to “trading”. I do not believe that Bitcoin exchanges should blindly follow “what the market dictates”. More often than not, what dictates the market is bad habits users formed because of the other Bitcoin services they used in the past, what other people are used to, and what feels familiar. Running a Bitcoin company should be inseparable from educating users on the best practices, and embedding these best practices into the user experience is the best way for them to learn. 
Another important anecdote which motivated me to build a dollar-cost averaging tool is a person very close to me who had made the decision to buy Bitcoin, but was so stressed out about when was the right time to buy that they ended up not buying Bitcoin for a whole 6 months after funding their Bull Bitcoin account. That person eventually gave up and ultimately invested a large amount all at once. In hindsight, it turned out to be one of the worst possible times to invest in Bitcoin during that year. Investing in Bitcoin can, and should be, a positive and rewarding experience. Buying Bitcoin every day is the right strategy, but it does not necessarily lead to the best outcome. The reality is that the best time to buy Bitcoin is when the market hits rock bottom (obviously). Sometimes, the upside from buying the dip can be much bigger than the risk (e.g. when the price dropped below $200 in 2015). But these are exceptions rather than the rule. And the cost of chasing dips is very high: stress, investing time and mental energy, and the very real psychological trauma which results from making bad trading decisions. Ultimately, it's better to do the right thing than to be lucky, but it's not always a bad idea to cheat on your dollar-cost averaging from time to time if you can live with the costs and consequences. Yours truly, Francis
Weekly Update: Parachute Townhall, Welcome $GET to ParJar, Uptrennd reaches 50k members, Fantom on IncognitoChain... – 6 Dec - 12 Dec'19
Hi Parachuters! As part 2 of 3 of today's rapid catch-up series of pending updates, here’s your week at Parachute + partners (6 Dec - 12 Dec'19): As mentioned last week, Cap and Ice hosted a townhall to talk about where we are at and where we are heading along with ample feedback and Q&A from the community. We covered a lot of ground: "value hypothesis for ParJar, Product Market fit, and our growth approach for 2020...performance of two key PAR utility metrics, staking and gas, and how we see growth for each in 2020...questions from the community and reviewed upcoming community initiatives". Click here to catch up on all that happened. GET Protocol’s $GET token was added to ParJar this week. Belated Birthday wishes to Doc Vic from Cuba. Jason lost a 5k $PAR wager with Cap on Victor’s age. Haha. Congratulations to Martha for winning this week’s Parena. As per the latest Fantasy Premier League (#FPL) update shared by LordHades this week, he is still ruling the charts at the top with NovelCloud and Alexis hot on his heels. From next week, "You can now view your first opponent in the 2019/20 FPL Cup on the My Team page - under Leagues". While you slay those miles with the Parachute Running Club (which has done 44 miles so far BTW), here’s a podcast to listen to. Cap’s recommendation: "It's geared towards people building products - but super super useful to think about any products you use. Skip to like 9 minutes in to skip through all the advertiesments ". Yes, I know. Cap wouldn’t be Cap without typos. Typos FTW! Parachute townhall Parachute-themed shirts designed by Doc Vic and Alejandro on Doc’s birthday. These are sick! If you want to see yourself on the Parachute world map, make sure to enter your location here. The entries are anonymous. In this week's Parachute Fantasy Football League update, Hang is in the first position followed by Clinton and Andy. Connor made it to the playoffs and is now in 4th position. 
So it means farewell to Nilz, Ken, Kamo and Cap from this season. CoD mobile players, don't forget to join the Parachute WarZone hosted by Doc Vic from Cuba. I hear there's $PAR and $AMGO to be won! The TTR Hat Contest ended this week with some solid entries running in the lead. Epic creation Wendell! In this week’s creative prompt by Jason, Parachuters had to “do 3 nice things for a total stranger”. Basically, be a true blue Parachuter 😊. For this week's Two-for-Tuesday, Gian made it free-for-all. No theme. Post music as you wish and win 500 $PAR. Cool! Benjamin and Charlotte hosted trivias in TTR this week. Those were loads of fun! Andy announced the start of a College Football Bowl Game Pickem contest in Parachute. 100k $PAR prize pool. Doc Vic hosted another round of Champions League wager this week in TTR. So much epicness in one picture. Jose, you are a genius! Andy's Advent Calendar journey continues. Catch up on the latest aXpire update and 20k AXPR burn here and here respectively. As you would already know, instead of pitting both startups against each other, XIO decided to accept both Opacity and Uptrennd into the incubator program and opened up staking for them. This marks the official launch of the XIO Blockchain Incubator and it’s been a roaring start with USD 7k worth of tokens locked up in one hour and the Opacity portal getting oversubscribed in no time. Video instructions for staking can be found here. Read up on the startups here. In three days, the total staking crossed the 1M XIO level. Insane! That is a great metric to measure performance. How does the $XIO token play a role in all this? The crew explained in this tweet thread. And with that, a series of related discussions kicked off, starting with the possibility of self-nomination for startups. Have a sub-100 CMC project that you think should be part of the incubator? Don’t forget to tag them. Plus, a cool 25k $XIO giveaway was launched. 
Remember, meaningful conversation is always welcome at the incubator and more often than not, they get rewarded. Check out the latest update on the Birdchain App SMS feature along with an expanded list of supported countries. Silent Notary reduced the $LAW token requirement for running a Masternode from 100M to 20M this week. Russian research company sudexpa.ru also gave its vote of confidence to Silent Notary in terms of its immutability. Wibson Marketing Manager Fi Scantamburlo attended the Latin American Bitcoin Conference Uruguay to speak on Data privacy, monetisation and how Wibson helps achieve these. Opacity now allows shared file preview for uploaded docs. Shared File Preview on Opacity Fantom's foray into the Afghan Ministry of Health's efforts to fight counterfeit drugs and other public health initiatives were covered by Forbes this week. Last week, we shared that Sikoba's e-voting platform, Itugen, which is based on Fantom’s Lachesis consensus was released. This week, they published its technical whitepaper. With so many moving parts in the project and so much happening all around, a recap is always a welcome refresher to catch up. $FTM got listed on South Korea’s Coinone with a $KRW pairing. It was also integrated with the IncognitoChain project’s pDEX with a $pUSDT pairing (remember, Harmony was added to the same platform a few days back?). IncognitoChain allows cryptos to be transacted privately using sidechains including those coins/tokens which are not privacy-oriented. Fantom also launched a developer portal and technical documentation ahead of the XAR Network mainnet release. The interoperability bridge is out as well. This allows both ERC20 and BEP2 token holders to move their tokens to the XAR Network. The wallet allows both staking and delegation. For the guide to joining XAR Network as a validator node, click here. A simple guide to staking on XAR Network can be found here. The team also sat down for an AMA with COTI this week. 
Blockchain Magazine’s interview of Michael was published. Continuing with Uptrennd’s 24 Days of Celebrations started last week, this week they hosted an Escape Room contest and Photo contest. The latest $1UP tokenomics update can be seen here. After 11 months, the platform now has 50k users across 177 countries. Wowza! And wicked stats on the engagement metrics as well. Jeff’s interview with Crypto Beadles came out this week. A few entries for the Uptrennd Photo Contest Click here and here for the latest District Weekly and Dev Update from District0x. In case you missed this week’s Dapp Digest, you can watch it here. Aragon fans will be in for a treat since it features Aragon Co-Founder Luis Cuende as a special guest. Remember, we had discussed last week that the Shuffle Monster Raffle had crossed a 10k $SHUF pool. Turns out it got to 13k+. Wow! The latest Hydro developer update is a comprehensive roundup from the entire ecosystem. VCC Exchange listed $HYDRO with a $BTC pairing. Hydro’s security tokenisation protocol, Hail, moved to mainnet this week. The team travelled to Boston for MassChallenge Fintech. Hydro will be hosting a Banking-as-a-Service happy hour next week to talk on how they are building solutions in the BaaS space. For starters, don’t forget to read their article on blockchain applications in finance. The team appeared for an AMA with Apache Traders which also featured a 45k $HYDRO giveaway. Digital payments platform VoPay is now partnered with Hydro for end-to-end payment solutions using Hydrogen API and other Hydro tools. Hydro’s smart contract was audited by Callisto and passed their test with flying colours except for one "low severity" issue. The result: "The contract can be deployed". CTO Tim Allard was interviewed by Ethereum Network Nigeria as part of their Ethereum personality chat series. For the latest update on the community explorer Frost, click here. 
In Pynk’s first guest blog post, community member (or, Pynkster) Alistaire Wallace talks about what the coming year could hold for Pynk and its community of predictors. Check out the transcript of Sentivate’s AMA with tehMoonwalkeR here. Sentivate’s new office in PA is shaping up quite well This week at OST was all about the Pepo app: from angel investor Kartik to Rocket NFT’s Alex Masmej joining the platform, accelerator The Fledge using Pepo Conversations to power community-sourced improvements to businesses, Home for the Holidays Challenge to explain crypto/blockchain to relatives (with a total USD 2k in Pepo coins in prizes) and a “best lifehack” bounty posted by Jason on the app. If you’ve missed all SelfKey news from the past month, you can catch up from the November progress report. Also, did you know that the group Legion of Doom which was once considered to be the most capable hacking group in the world was in a long drawn feud with Masters of Deception in what is now known as the Great Hacker War? Learn more info like this from SelfKey’s latest article on hacking groups. Constellation CEO Ben Jorgensen will be speaking at the Crypto 2020 Summit. If you’re attending, make sure to say Hi. Arena Match announced a trading competition on DDEX with 4M $AMGO tokens to be won. Lucky Bluff Poker will be sponsoring next week’s Arena Match Raffle. The latest Harmony update compilation from the whole team can be found here. In the latest Pangea statistics (Harmony’s experimental staking game to test the limits of its tech), the average staking position is 1.8M $ONE, with 75% of participants operating nodes themselves while the rest use delegates. Plus, check out the newest upgrades here. Honest Mining announced mainnet support for the native $ONE token swap. $ONE is also in consideration for listing on Binance US. The token was listed on Pionex this week. The Intellishare website registration and login functions will be down next week for a scheduled upgrade. 
Also, $INE traders make sure to keep a note of WBFex temporarily disabling the $ETH trading pair. Jobchain’s $JOB token got listed on Bilaxy exchange, P2PB2B exchange, SWFT Blockchain wallet and SWOP.SPACE exchange. The project was also given an A+ score by Xangle. Congrats! And with that, it’s a wrap. See you again soon with another weekly update. Bye!
BC Hydro Scams are back and more professional than others.
Heads up. The same BC Hydro scam is back targeting businesses with easily obtained listing information. Call starts with asking to speak with the manager, owner, etc. The manager is instructed to call back an 800 number. An auto attendant picks up. "Welcome to BC Hydro, press 1 for ...." Gentleman with proper grammar and no accent picks up. "Bla bla business information. We need a deposit, urgent. Must pay at office on Gilmore in cash. We have technicians scheduled to disconnect you in 30 minutes. Since it's so far from you. I'll make an exception. I'll let you do a one time payment with our commercial customer system." Puts me on hold, hold music and such. Directs me to withdraw $1500 and deposit it at an address near me. Guy then asks for a number where I can receive texts. Sends me an image of a QR code beside a BC Hydro logo. I was near the address, so curiosity sent me to a corner store with a Bitcoin machine. Sure enough, he wants me to deposit into the machine. I was supposed to meet some guys after the deposit so that they would disconnect my power and let them scan my deposit receipt. I never made the deposit so I wasn't sure if this QR code is the bitcoin wallet or what. I almost got fooled because: 1-800 number, auto-attendant, BC Hydro logo, hold music. All of which are easily fake-able. Where they failed: not the same 800 number as BC Hydro, no BC Hydro office, asking for bitcoins, no verification of account holder information.
Last updated: May 30, 2018: Updated wallet info with release of Trinity. This 4 part series from the IOTA Foundation covers most of the technical FUD centered at IOTA. https://blog.iota.org/official-iota-foundation-response-to-the-digital-currency-initiative-at-the-mit-media-lab-part-1-72434583a2 Also the official IOTA FAQ on iota.org answers nearly all of these questions if you want to hear the answers directly. Purpose of Writing Since posting FUD is so ridiculously low-effort in comparison to setting the record straight, I felt it necessary to put a log of copy-pastas together to balance the scales so it's just as easy to answer the FUD as it was to generate it. So next time you hear someone say "IOTA is centralized", you no longer have to take an hour out of your day and spin your wheels with someone who likely had an agenda to begin with. You just copy-paste away and move on. It's also worth mentioning IOTA devs are too damn busy working on the protocol and doing their job to answer FUD. So I felt a semblance of responsibility. Here they are. These answers are to the best of my understanding, so if you see something that doesn't look right let me know! They are divided into the following categories so if you are interested in a specific aspect of IOTA you can scroll to that section. 1) WALLET 2) COMMUNITY 3) INVESTING 4) TECHNICAL
IOTA was hacked and users funds were stolen!
First, IOTA was not hacked. The term “hacked” is thrown around far too brazenly nowadays and often used to describe events that weren’t hacks to begin with. It’s a symptom of this space growing too fast, creating situations of the blind leading the blind and causing hysteria. What happened: Many IOTA users trusted a certain 3rd-party website to create the seed for their wallets. This website silently sent copies of all the seeds generated to an email address and waited until it felt it had enough funds, then it took everyone’s money simultaneously. That was the ”hack”. https://blog.iota.org/the-secret-to-security-is-secrecy-d32b5b7f25ef The lesson: The absolute #1 marketed feature of crypto is that you are your own bank. Of everything that is common knowledge about crypto, this is at the top. But being your own bank means you are responsible for the security of your own funds. There is no safety net or centralized system in place that is going to bail you out. For those that don’t know (and you really should if you’ve invested in anything crypto), your seed is your username, password, security question and backup email all rolled into one. Would you trust a no-name 3rd-party website to produce the username and password for your bank account? Because that’s essentially what users did. The fix: Make your seed offline with the generators in the sidebar, or use dice. This is outlined in “How to generate wallet and seed” directly following. The Trinity and CarrIOTA wallets will have seed generators within them upon their release.
How to generate wallet and seed
1) Download the official Trinity wallet here. 2) Follow the instructions in the app. 3) Do not run any other apps in conjunction with Trinity; make sure all other apps are completely closed out on your device.
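For those who prefer code to dice, an offline seed generator is only a few lines. The sketch below is illustrative, not an official IOTA tool; it draws 81 characters from the IOTA seed alphabet (A-Z plus the digit 9) using the operating system's cryptographically secure random number generator. Run it on an offline machine and never paste the result into a website.

```python
import secrets

# Illustrative offline seed generator (not an official IOTA tool): 81
# characters drawn from the IOTA seed alphabet (A-Z plus the digit 9),
# using the OS's cryptographically secure random source via `secrets`.
ALPHABET = "9ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def generate_seed(length: int = 81) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

seed = generate_seed()
print(seed)  # never share, screenshot, or reuse this value
```

Never use `random.choice` for this; only a CSPRNG such as `secrets` is appropriate for key material.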
Are you sure a computer can’t just guess my seed?
An IOTA seed is 81 characters long, each drawn from a 27-character alphabet, so there are 27^81 possible seeds, more than there are atoms in the observable universe. All the computers in the world combined would take billions of years just to find your randomly generated seed, which sits somewhere among those 27^81 combinations. The chance of someone randomly generating the exact same seed as yours is 1 / (27^81). If you can’t fathom the number 27^81, this video should help: https://www.youtube.com/watch?v=p8YIdmwcubc
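A quick sanity check of those numbers in Python (the atom count is the commonly cited order-of-magnitude estimate of about 10^80 for the observable universe):

```python
# 81 positions, each one of 27 possible characters.
seed_space = 27 ** 81            # equals 3**243, roughly 8.7e115
atoms_in_universe = 10 ** 80     # commonly cited order-of-magnitude estimate

print(seed_space > atoms_in_universe)  # True
print(len(str(seed_space)))            # 116 (digits)
```

So the seed space exceeds the atom count by more than 35 orders of magnitude.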
Download Bolero and run! Bolero is an all-in-one full node install package with the latest IOTA IRI and Nelson all under a one-click install! https://github.com/SemkoDev/bolero.fun/releases "If you want to help the network then spam the network. If you really want to help the network then create a full node and let others spam you!"
No questions or concerns get upvoted, only downvoted!
That’s just the nature of this business. Everyone in these communities has money at stake and is extremely incentivized to keep only positive news at the top of the front page. There is nothing you're going to do about that on this subreddit or any crypto subreddit; it's just a Reddit fact of life we have to deal with. Everyone has a downvote and everyone has an upvote. But what can be done is simply answer the questions, even if they are downvoted to hell. Yes, most people won't see the answers or discussion, but that one person will, and every little bit counts. I will say that there are most certainly answers to nearly every FUD topic out there. Every single one. A lot of the posts I'm seeing as of late, especially since the price spike, are rehashed from months ago. They often go unanswered not because there isn't an answer or explanation, but because the regulars who have the answers simply don't see them (for the reason listed above). I can see how it's easy for this to be interpreted (especially by new users) as there not being an answer, or "the FUDsters are on to something", but that's just not the case.
IOTA Devs do not respond appropriately to criticism
When critics provide feedback that is ACTUALLY useful to the devs, then sure, they'll be glad to hear it. So far not once has an outside dev brought up something that the IOTA devs found useful. Every single time it ends up being something that was already taken into consideration in the design, and if the critic had done an ounce of research they would know that. Thus you often find the IOTA devs dismissing their opinion as FUD and responding with hostility, because all their critique is really doing is sending the message to their supporters that they are not supposed to like IOTA anymore. Nick Johnson was a perfect example of this. The Ethereum community was co-existing peacefully with IOTA’s community (as they do with nearly all altcoins) until Nick wrote his infamous article. Then almost overnight Ethereum decided it didn’t like IOTA anymore and we’ve been dealing with that shit since. As of today, add LTC to that list with Charlie’s (by his own admission) ignorant judgement of IOTA. 12/17/2017: Add John McAfee (Bitcoin Cash) and Peter Todd (Bitcoin) to the list of public figures who have posted ignorantly on IOTA.
A lot of crypto communities certainly like to hate on IOTA...
IOTA is disrupting the disrupters. It invented a completely new distributed ledger infrastructure (the tangle) that replaces the blockchain and solves its fundamental problems (namely fees and scaling). To give you an idea of the significance: 99% of the cryptocurrencies that exist are built on a blockchain. These projects have billions of dollars invested into them, meaning everyone in their communities is incentivized to see IOTA fail and spread as much FUD about it as possible. This includes well-known organizations, public figures, and brands. Everyone commenting in these subreddits and crypto communities has their own personal money at stake and skin in the game. Misinformation campaigns, paid reddit posters, upvote/downvote bots, and corrupt moderators are all very real in this space.
All IOTAs that will ever exist were sold at the ICO in 2015. There was no % reserved for development. Devs had to buy in with their personal money. Community donated back 5% of all IOTA so the IOTA foundation could be setup.
No inflation schedule? No additional coins? How is this sustainable?
Interestingly enough, IOTA is actually the only crypto that does not run into any problems with a currency cap and deflation. Because there are zero fees, you will always be able to pay for something for exactly what it's worth using IOTA, no matter how small the value. If by chance in the future a single iota grows so large in value that it no longer allows someone to pay for something in fractions of a penny, the foundation would just add decimal points, allowing a tenth or a hundredth or a thousandth of an iota to be transacted with. To give you some perspective: if a single iota equalled 1 penny, IOTA would have a 27 trillion dollar market cap (100x that of Bitcoin's today).
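The arithmetic behind that figure: the total supply is fixed at genesis at 2,779,530,283,277,761 iota, which is (3^33 - 1) / 2, so at one penny per iota:

```python
# Total iota in existence, fixed at genesis: (3**33 - 1) / 2
TOTAL_SUPPLY = 2_779_530_283_277_761

price_per_iota = 0.01  # one US penny
market_cap = TOTAL_SUPPLY * price_per_iota
print(f"${market_cap / 1e12:.1f} trillion")  # $27.8 trillion
```

which matches the roughly 27 trillion dollar figure above.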
IOTA is not for P2P, only for M2M
With the release of the trinity wallet, it's now dead simple for anyone to use IOTA funds for P2P. Try it out.
Companies technically don’t have to use the IOTA token
Yes, they do. Worth clarifying that 0-iota data transactions are perfectly fine and are welcomed, since they still provide PoW for 2 other transactions and help secure the network. In the early stages, these types of transactions will probably be what give us the tps/PoW needed to remove the coordinator and allow the network to defend against 34% attacks organically. But if someone does not want to sell or exchange their data for free (a 0-iota transaction), then Dominic is saying that the IOTA token must be used for that or any other exchange of value on the network. This is inherently healthy for the ecosystem since it provides a neutral, non-profit middle ground that all parties/companies can trust. If one company made their own token it wouldn’t be trusted, since companies are incentivized by profits and nothing is stopping them from manipulating their token to make more money. Thus, the IOTA Foundation will not partner with anyone who refuses to take this option off the table.
All these companies are going to influence IOTA development!!
These companies have no influence on the development of IOTA. They either choose to use it or they don’t.
Internet of things is cheap and will stay cheap
Internet of things is one application of IOTA and considered by many to be the 4th industrial revolution. Go do some googling. IOTA having zero fees enables M2M for the first time in history. Also, if a crypto can do M2M it sure as shit can do M2P and P2P. M2M is hard mode.
Investing in a project in its early stages was something typically reserved for wealthy individuals/organizations before ICOs became a thing. With early investing comes much less hand-holding and more responsibility on the user to know what they are doing. If you have a hard time accepting this responsibility, don’t invest and wait for the technology to get easier for you. How many people actually knew how to use and mine bitcoin in 2009, before it had all its GUI infrastructure? IOTA is a tangle, the first of its kind, NOT a copy-paste blockchain. As a result, wallets and applications for IOTA are the first of their kind, and translating the tangle into a nice, clean, user-friendly blockchain-like experience for the masses is even more taxing.
Why is the price of my coin falling?!
This may be the most asked question on any crypto subreddit, but it's also the easiest to explain. The price typically falls when bad things happen to a coin or media fabricates bad news about a coin and a portion of investors take it seriously. The price increases when good things happen to a coin, such as a new exchange listing or a partnership announcement. The one piece that is often forgotten but trumps all these effects is something called "market forces". Market forces are what happens to your coin when another coin, or a group of other coins, gets a big news hit. For example, when the IOTA data marketplace was released, IOTA hit a 5x bull run in a single week. But did you notice all the other altcoins in the red? There are a LOT of traders looking at the space as a whole, ready to get in on ANY bull action, who will sell their other coins to do so. This effect can also be compounded over a long period of time, such as what we witnessed when the bitcoin fork FOMO was going on and altcoins were squeezed continuously to feed it for weeks/months. These examples really just scratch the surface of market forces, but the big takeaway is that your coin, or any coin, will most certainly fall (or rise) in price as a result of what other coins are doing, the best-known example being bitcoin’s correlation to every coin on the market. If you don't want to play the market-force game or don't have time for it, then you can never go wrong buying and holding. It's also important to note that there are layers of investors. There's a top layer of light-stepping investors, a mixture of day traders and gamblers trying to jump in and jump out to make quick money, then look for the next buying (or shorting) opportunity at another coin. There's a middle layer of buyers and holders who did their research, believe in the tech, and are placing their bets that it will win out in the long run.
And the bottom layer is the founders and devs who are in it till the bitter end, there to see the vision realized. When a coin goes on a bull run, always expect that any day the top layer is going to pack up and leave for the next coin. But the long game is all about that middle layer. That is the layer that gives bear markets their price-drop resistance. That is why the meme "HODL" is so effective: it very elegantly simplifies this whole concept for the common joe and makes them a part of that middle layer, regardless of whether they understand what's going on or not.
How is IOTA free and how does it scale
IOTA is an altruistic system. Proof of work is done in IOTA just like in Bitcoin, only here a user’s device/phone must do PoW for 2 other transactions before issuing one of its own. Therefore there are no miners and no fees, and the network becomes faster the more transactions are posted. Because of this, spamming the network is encouraged, since spam transactions provide PoW for 2 other transactions and speed up the network.
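The rule can be sketched as a toy model (illustrative only, not the real IOTA protocol or its actual PoW; the hash scheme, difficulty, and transaction shape here are all made up for the example): to issue a transaction, a device must first do proof-of-work that references two existing unconfirmed transactions ("tips").

```python
import hashlib
import itertools

# Toy model of the altruistic rule: issuing a transaction requires doing
# PoW over the payload plus two approved tips. NOT the real IOTA PoW.

def proof_of_work(data: str, difficulty: int = 2) -> int:
    # Find a nonce whose SHA-256 hash starts with `difficulty` zero hex digits.
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce

def issue_transaction(payload: str, tips: list) -> dict:
    tip_a, tip_b = tips[:2]  # approve two tips chosen from the tangle
    nonce = proof_of_work(payload + tip_a + tip_b)
    return {"payload": payload, "approves": (tip_a, tip_b), "nonce": nonce}

tx = issue_transaction("pay 5i to Bob", ["tipA", "tipB", "tipC"])
print(tx["approves"])  # ('tipA', 'tipB')
```

The key property is that every new transaction carries the work that confirms two older ones, so more traffic means more confirmation work, not more congestion.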
IOTA is centralized
IOTA is more decentralized than any blockchain crypto that relies on 5 pools of miners, all largely based in China. Furthermore, the coordinator is not a server in the dev’s basement that secretly processes all the transactions. It’s several nodes around the globe that add milestone transactions to show the direction of the IF’s tangle within the DAG, so people don’t accidentally follow a fork from a malicious actor. Anyone with the know-how can fork the tangle right now with a double-spend. But no one would follow their fork, because the coordinator reveals which tangle is the legit IF one. If the coordinator weren’t there (assuming low honest-transaction volume), there would be no way to discern which path to follow, especially after the tangle diverges into forks of forks. Once throughput of honest transactions is significant enough, the “honest tangle” will replace the coordinated one and people will know which one to follow simply because it’s the biggest one in the room. Referencing the coordinator is also optional. Also, if you research and understand how IOTA intends to work without the coordinator, it’s easier to accept it for now as training wheels. I suggest reading pg 15 and on of the white paper, which analyzes in great depth how the network will defend against different attack scenarios without a coordinator. For the past several months, the IOTA Foundation has been using St. Petersburg College’s supercomputer to stress test IOTA and learn when they can turn the coordinator off. There will likely be a blog about the results soon.
This is another great read covering double spends on IOTA without a coordinator: www.tangleblog.com/2017/07/10/is-double-spending-possible-with-iota/ This too: http://www.reddit.com/Iota/comments/7eix4a/any_iota_guru_that_can_explain_what_this_guy_is/dq5ijrm Also this correspondence with Vitalik and Come_from_Beyond: https://twitter.com/DavidSonstebo/status/932510087301779456 At the end of the day, extraordinary claims require extraordinary evidence, and it’s completely understandable for folks to approach IOTA with an “I’ll believe it when I see it” attitude. It’s all about your risk tolerance.
Masked authenticated messages exist right now so data can be transferred privately. Very important for businesses.
A centralized coin mixer, run by the foundation, is out. Logs are kept so they can collect data and improve it. Folks can copy the coin mixer code and run it themselves. The goal is for the mixer to be decentralized and runnable by any node.
How do nodes scale? How on earth can all that data be stored?
Full nodes store, update and verify from the last snapshot, which happens roughly every month. It's on the roadmap to make snapshotting automatic and up to each full node’s discretion. With automatic snapshots, each full node will act as a partial perma-node and choose when to snapshot its tangle data. If someone wants to keep their tangle data for several months or even years, they can simply choose not to snapshot; if they are limited on hard drive space, they can snapshot every week. Perma-nodes would store the entire history of the tangle from the genesis. These are optional and would likely only be created by companies who wish to sell historical access to the tangle as a service, or companies who heavily use the tangle for their own data and want quick, convenient access to their data’s history. Swarm nodes are also in development, which will ease the burden on full nodes. https://blog.iota.org/iota-development-roadmap-74741f37ed01
Hi, I have had Bitcoin Core running on a Raspberry Pi for about 4 months now (followed the RaspiBolt instructions from Stadicus) and it worked fine until 3 days ago, when it suddenly stopped working. With bitcoin-cli getblockchaininfo I got
error code: -28 Loading block index...
or something similar. When I looked in the debug log file I saw this:
2019-08-29T16:48:08Z BerkeleyEnvironment::Open: LogDir=/home/bitcoin/.bitcoin/database ErrorFile=/home/bitcoin/.bitcoin/db.log
2019-08-29T16:48:11Z init message: Loading banlist...
2019-08-29T16:48:11Z Cache configuration:
2019-08-29T16:48:11Z * Using 2.0 MiB for block index database
2019-08-29T16:48:11Z * Using 8.0 MiB for chain state database
2019-08-29T16:48:11Z * Using 90.0 MiB for in-memory UTXO set (plus up to 47.7 MiB of unused mempool space)
2019-08-29T16:48:11Z init message: Loading block index...
2019-08-29T16:48:11Z Opening LevelDB in /home/bitcoin/.bitcoin/blocks/index
2019-08-29T16:48:11Z Opened LevelDB successfully
2019-08-29T16:48:11Z Using obfuscation key for /home/bitcoin/.bitcoin/blocks/index: 0000000000000000
2019-08-29T16:54:14Z
2019-08-29T16:54:14Z Bitcoin Core version v0.18.1 (release build)
2019-08-29T16:54:14Z Assuming ancestors of block 0000000000000000000f1c54590ee18d15ec70e68c8cd4cfbadb1b4f11697eee have valid signatures.
2019-08-29T16:54:14Z Setting nMinimumChainWork=0000000000000000000000000000000000000000051dc8b82f450202ecb3d471
2019-08-29T16:54:14Z Using the 'standard' SHA256 implementation
2019-08-29T16:54:15Z Default data directory /home/bitcoin/.bitcoin
2019-08-29T16:54:15Z Using data directory /home/bitcoin/.bitcoin
2019-08-29T16:54:15Z Config file: /home/bitcoin/.bitcoin/bitcoin.conf
2019-08-29T16:54:15Z Using at most 20 automatic connections (1024 file descriptors available)
2019-08-29T16:54:15Z Using 16 MiB out of 32/2 requested for signature cache, able to store 524288 elements
2019-08-29T16:54:15Z Using 16 MiB out of 32/2 requested for script execution cache, able to store 524288 elements
2019-08-29T16:54:15Z Using 4 threads for script verification
2019-08-29T16:54:15Z scheduler thread start
2019-08-29T16:54:15Z HTTP: creating work queue of depth 16
2019-08-29T16:54:15Z Config options rpcuser and rpcpassword will soon be deprecated. Locally-run instances may remove rpcuser to use cookie-based auth, or may be replaced with rpcauth. Please see share/rpcauth for rpcauth auth generation.
2019-08-29T16:54:15Z HTTP: starting 4 worker threads
2019-08-29T16:54:15Z Using wallet directory /home/bitcoin/.bitcoin
2019-08-29T16:54:15Z init message: Verifying wallet(s)...
2019-08-29T16:54:15Z Using BerkeleyDB version Berkeley DB 4.8.30: (April 9, 2010)
2019-08-29T16:54:15Z Using wallet /home/bitcoin/.bitcoin
2019-08-29T16:54:15Z BerkeleyEnvironment::Open: LogDir=/home/bitcoin/.bitcoin/database ErrorFile=/home/bitcoin/.bitcoin/db.log
2019-08-29T16:54:16Z init message: Loading banlist...
2019-08-29T16:54:16Z Cache configuration:
2019-08-29T16:54:16Z * Using 2.0 MiB for block index database
2019-08-29T16:54:16Z * Using 8.0 MiB for chain state database
2019-08-29T16:54:16Z * Using 90.0 MiB for in-memory UTXO set (plus up to 47.7 MiB of unused mempool space)
2019-08-29T16:54:16Z init message: Loading block index...
2019-08-29T16:54:16Z Opening LevelDB in /home/bitcoin/.bitcoin/blocks/index
2019-08-29T16:54:16Z Opened LevelDB successfully
2019-08-29T16:54:16Z Using obfuscation key for /home/bitcoin/.bitcoin/blocks/index: 0000000000000000
After that there should follow a line about loading the block index, but it never appears. It seems to be restarting without shutting down correctly; maybe it cannot find the block index? I tried reinstalling Core, recreating the symlink to the HDD and restarting the service, but nothing happened. Every time it starts it gets stuck at the same point. It also takes an awfully long time to stop bitcoind, but I don't know if that is normal. I found several people with similar problems online but nothing has helped so far. Has anyone had a similar problem in the past and found a solution?
Over the last several days I've been looking in detail at numerous aspects of the now infamous CTOR change that is scheduled for the November hard fork. I'd like to offer a concrete overview of what exactly CTOR is, what the code looks like, how well it works, what the algorithms are, and the outlook. If anyone finds the change mysterious or unclear, then hopefully this will help them out. This document is placed into the public domain.
What is TTOR? CTOR? AOR?
Currently in Bitcoin Cash, there are many possible ways to order the transactions in a block. There is only a partial ordering requirement in that transactions must be ordered causally -- if a transaction spends an output from another transaction in the same block, then the spending transaction must come after. This is known as the Topological Transaction Ordering Rule (TTOR) since it can be mathematically described as a topological ordering of the graph of transactions held inside the block. The November 2018 hard fork will change to a Canonical Transaction Ordering Rule (CTOR). This CTOR will enforce that for a given set of transactions in a block, there is only one valid order (hence "canonical"). Any future blocks that deviate from this ordering rule will be deemed invalid. The specific canonical ordering that has been chosen for November is a dictionary ordering (lexicographic) based on the transaction ID. You can see an example of it in this testnet block (explorer here, provided this testnet is still alive). Note that the txids are all in dictionary order, except for the coinbase transaction which always comes first. The precise canonical ordering rule can be described as "coinbase first, then ascending lexicographic order based on txid". (If you want to have your bitcoin node join this testnet, see the instructions here. Hopefully we can get a public faucet and ElectrumX server running soon, so light wallet users can play with the testnet too.) Another ordering rule that has been suggested is removing restrictions on ordering (except that the coinbase must come first) -- this is known as the Any Ordering Rule (AOR). There are no serious proposals to switch to AOR but it will be important in the discussions below.
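The canonical rule "coinbase first, then ascending lexicographic order based on txid" is simple enough to check in a few lines. A sketch (txids here are plain strings for readability; a real node compares raw txid bytes):

```python
# Sketch of the canonical ordering check described above: coinbase first,
# then the remaining txids in ascending lexicographic order.

def is_ctor_ordered(txids: list) -> bool:
    body = txids[1:]  # skip the coinbase, which always comes first
    return body == sorted(body)

block = ["coinbase", "0a11", "1f02", "9c3d"]
shuffled = ["coinbase", "9c3d", "0a11", "1f02"]
print(is_ctor_ordered(block))     # True
print(is_ctor_ordered(shuffled))  # False
```

Note that under CTOR there is exactly one valid arrangement of any given transaction set, which is what makes the ordering information redundant for protocols like graphene (discussed below).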
Two changes: removing the old order (TTOR->AOR), and installing a new order (AOR->CTOR)
The proposed November upgrade combines two changes in one step:
Removing the old causal rule: now, a spending transaction can come before the output that it spends from the same block.
Adding a new rule that fixes the ordering of all transactions in the block.
In this document I am going to distinguish these two steps (TTOR->AOR, AOR->CTOR) as I believe it helps to clarify the way different components are affected by the change.
Code changes in Bitcoin ABC
In Bitcoin ABC, several thousand lines of code have been changed from version 0.17.1 to version 0.18.1 (the current version at time of writing). The differences can be viewed here, on github. The vast majority of these changes appear to be various refactorings, code style changes, and so on. The relevant bits of code that deal with the November hard fork activation can be found by searching for "MagneticAnomaly"; the variable magneticanomalyactivationtime sets the time at which the new rules will activate. The main changes relating to transaction ordering are found in the file src/validation.cpp:
Function ConnectBlock previously had one loop, that would process each transaction in order, removing spent transaction outputs and adding new transaction outputs. This was only compatible with TTOR. Starting in November, it will use the two-loop OTI algorithm (see below). The new construction has no ordering requirement.
Function ApplyBlockUndo, which is used to undo orphaned blocks, is changed to work with any order.
When orphaning a block, transactions will be returned to the mempool using addForBlock that now works with any ordering (src/txmempool.cpp).
Serial block processing (one thread)
One of the most important steps in validating blocks is updating the unspent transaction outputs (UTXO) set. It is during this process that double spends are detected and invalidated. The standard way to process a block in bitcoin is to loop through transactions one-by-one, removing spent outputs and then adding new outputs. This straightforward approach requires exact topological order and fails otherwise (therefore it automatically verifies TTOR). In pseudocode:
for tx in transactions:
    remove_utxos(tx.inputs)
    add_utxos(tx.outputs)
Note that modern implementations do not apply these changes immediately, rather, the adds/removes are saved into a commit. After validation is completed, the commit is applied to the UTXO database in batch. By breaking this into two loops, it becomes possible to update the UTXO set in a way that doesn't care about ordering. This is known as the outputs-then-inputs (OTI) algorithm.
for tx in transactions:
    add_utxos(tx.outputs)

for tx in transactions:
    remove_utxos(tx.inputs)
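The order-independence of the two-loop OTI approach is easy to demonstrate with a runnable toy version (outputs modeled as opaque identifiers; the transaction shape is invented for the example):

```python
# Runnable toy version of the two-loop OTI algorithm above. Because all
# outputs are added before any inputs are removed, the resulting UTXO set
# does not depend on the order transactions appear in the block.

def apply_block_oti(utxos: set, transactions: list) -> set:
    utxos = set(utxos)
    for tx in transactions:
        utxos.update(tx["outputs"])    # first loop: add every output
    for tx in transactions:
        for spent in tx["inputs"]:     # second loop: remove every input
            utxos.remove(spent)        # a KeyError here means a double spend
    return utxos

genesis = {"A"}
tx1 = {"inputs": ["A"], "outputs": ["B", "C"]}
tx2 = {"inputs": ["B"], "outputs": ["D"]}  # spends an output created in-block

# Same result whichever order the block lists them in:
print(apply_block_oti(genesis, [tx1, tx2]))  # {'C', 'D'}
print(apply_block_oti(genesis, [tx2, tx1]))  # {'C', 'D'}
```

Note that tx2 spends an output created by tx1 in the same block, yet OTI validates both orderings; that is exactly why plain OTI accepts AOR blocks and needs extra bookkeeping if TTOR must be enforced.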
The UTXO updates actually form a significant fraction of the time needed for block processing. It would be helpful if they could be parallelized. There are some concurrent algorithms for block validation that require quasi-topological order to function correctly. For example, multiple workers could process the standard loop shown above, starting at the beginning. A worker temporarily pauses if the utxo does not exist yet, since it's possible that another worker will soon create that utxo. There are issues with such order-sensitive concurrent block processing algorithms:
Since TTOR would be a consensus rule, parallel validation algorithms must also verify that TTOR is respected. The naive approach described above actually is able to succeed for some non-topological orders; therefore, additional checks would have to be added in order to enforce TTOR.
The worst-case performance can be that only one thread is active at a time. Consider the case of a block that is one long chain of dependent transactions.
In contrast, the OTI algorithm's loops are fully parallelizable: the worker threads can operate in an independent manner and touch transactions in any order. Until recently, OTI was thought to be unable to verify TTOR, so one reason to remove TTOR was that it would allow changing to parallel OTI. It turns out however that this is not true: Jonathan Toomim has shown that TTOR enforcement is easily added by recording new UTXOs' indices within-block, and then comparing indices during the remove phase. In any case, it appears to me that any concurrent validation algorithm would need such additional code to verify that TTOR is being exactly respected; thus for concurrent validation TTOR is a hindrance at best.
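The index-recording idea can be sketched roughly as follows (my own illustrative reconstruction in the spirit of the OTI pseudocode above, not Toomim's actual code): during the add phase, record the in-block index at which each output was created; during the remove phase, reject any spend of an output created at the same or a later index.

```python
# Rough sketch of OTI with TTOR enforcement via index recording. Each
# output remembers which in-block transaction created it; a spend of an
# output created at the same or a later index violates topological order.

def validate_block_ttor(utxos: set, transactions: list) -> bool:
    created_at = {}
    utxos = set(utxos)
    for i, tx in enumerate(transactions):          # add phase
        for out in tx["outputs"]:
            created_at[out] = i
            utxos.add(out)
    for i, tx in enumerate(transactions):          # remove phase
        for spent in tx["inputs"]:
            if spent not in utxos:
                return False                       # missing or double-spent output
            if created_at.get(spent, -1) >= i:
                return False                       # TTOR violation: spend precedes creation
            utxos.remove(spent)
    return True

tx1 = {"inputs": ["A"], "outputs": ["B"]}
tx2 = {"inputs": ["B"], "outputs": ["C"]}
print(validate_block_ttor({"A"}, [tx1, tx2]))  # True  (topological order)
print(validate_block_ttor({"A"}, [tx2, tx1]))  # False (spend before creation)
```

Both loops remain embarrassingly parallel: each worker only needs the shared `created_at` map, built in the add phase, to do the index comparison.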
Advanced parallel techniques
With Bitcoin Cash blocks scaling to large sizes, it may one day be necessary to scale onto advanced server architectures involving sharding. A lot of discussion has been made over this possibility, but really it is too early to start optimizing for sharding. I would note that at this scale, TTOR is not going to be helpful, and CTOR may or may not lead to performance optimizations.
Block propagation (graphene)
A major bottleneck that exists in Bitcoin Cash today is block propagation. During the stress test, it was noticed that the largest blocks (~20 MB) could take minutes to propagate across the network. This is a serious concern, since propagation delays mean increased orphan rates, which in turn complicate the economics and incentives of mining. 'Graphene' is a set reconciliation technique using bloom filters and invertible bloom lookup tables. It drastically reduces the amount of bandwidth required to communicate a block. Unfortunately, the core graphene mechanism does not provide ordering information, and so if many orderings are possible then ordering information needs to be appended. For large blocks, this ordering information makes up the majority of the graphene message. To reduce the size of ordering information while keeping TTOR, miners could optionally decide to order their transactions in a canonical ordering (Gavin's order, for example) and the graphene protocol could be hard-coded so that this kind of special order is transmitted in one byte. This would add a significant technical burden on mining software (to create blocks in such a specific unusual order) as well as on graphene (which must detect this order and be able to reconstruct it). It is not clear to me whether it would be possible to efficiently parallelize sorting algorithms that reconstruct these orderings. The adoption of CTOR gives an easy solution to all this: there is only one ordering, so no extra ordering information needs to be appended. The ordering is recovered with a comparison sort, which parallelizes better than a topological sort. This should simplify the graphene codebase and remove the need to start considering support for various optional ordering encodings.
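To see why the ordering information dominates for large blocks, a back-of-envelope estimate helps: encoding one arbitrary permutation of n transactions needs about log2(n!) bits, which grows as n·log2(n). The 80,000-transaction figure below is my own rough stand-in for a ~20 MB block of 250-byte transactions, not a number from the graphene papers:

```python
import math

# Back-of-envelope estimate: an arbitrary order of n transactions needs
# about log2(n!) bits to encode, computed here via the log-gamma function.

def ordering_bits(n: int) -> float:
    return math.lgamma(n + 1) / math.log(2)  # log2(n!)

n = 80_000  # roughly a 20 MB block of 250-byte transactions
kib = ordering_bits(n) / 8 / 1024
print(f"~{kib:.0f} KiB just to describe the transaction order")
```

That is on the order of 145 KiB of pure ordering data, while the graphene set-reconciliation payload itself is typically only a few KiB, which is consistent with the claim that ordering information makes up the majority of the message.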
Reversibility and technical debt
Can the change to CTOR be undone at a later time? Yes and no. For block validators / block explorers that look over historical blocks, the removal of TTOR will permanently rule out usage of the standard serial processing algorithm. This is not really a problem (aside from the one-time annoyance), since OTI appears to be just as efficient in serial, and it parallelizes well. For anything that deals with new blocks (like graphene, network protocol, block builders for mining, new block validation), it is not a problem to change the ordering at a later date (to AOR / TTOR or back to CTOR again, or something else). These changes would add no long term technical debt, since they only involve new blocks. For past-block validation it can be retroactively declared that old blocks (older than a few months) have no ordering requirement.
Summary and outlook
Removing TTOR is the most disruptive part of the upgrade, as other block processing software needs to be updated in kind to handle transactions coming in any order. These changes are however quite small and they naturally convert the software into a form where concurrency is easy to introduce.
In the near term, TTOR / CTOR will show no significant performance differences for block validation. Note that right now, block validation is not the limiting factor in Bitcoin Cash, anyway.
In medium term, software switching over to concurrent block processing will likely want to use an any-order algorithm (like OTI). Although some additional code may be needed to enforce ordering rules in concurrent validation, there will still be no performance differences for TTOR / AOR / CTOR.
In the very long term, it is perhaps possible that CTOR will show advantages for full nodes with sharded UTXO databases, if that ever becomes necessary. It's probably way too early to care about this.
With TTOR removed, the further addition of CTOR is actually a very minor change. It does not require any other ecosystem software to be updated (they don't care about order). Not only that, we aren't stuck with CTOR: the ordering can be quite easily changed again in the future, if need be.
The primary near-term improvement from the CTOR will be in allowing a simple and immediate enhancement of the graphene protocol. This impacts a scaling bottleneck that matters right now: block propagation. By avoiding the topic of complex voluntary ordering schemes, this will allow graphene developers to stop worrying about how to encode ordering, and focus on optimizing the mechanisms for set reconciliation.
Taking a broader view, graphene is not the magic bullet for network propagation. Even with the CTOR-improved graphene, we might not see vastly better performance right away. There is also work needed in the network layer to simply move the messages faster between nodes. In the last stress test, we also saw limitations on mempool performance (tx acceptance and relaying). I hope both of these fronts see optimizations before the next stress test, so that a fresh set of bottlenecks can be revealed.
Weekly Update: Welcome $INE to ParJar, 130k tips in 24 hours, XIO Startups Round I, Constellation mainnet... – 22 Nov - 28 Nov'19
Hi everyone! Here’s your week at Parachute + partners (22 Nov - 28 Nov'19): In this week’s Parena, Light's Nervous Nightingale took home the grand prize of 25k $PAR. Here’s Cap and Ice at a Chainlink Offchain Protocol event to spread the word on ParJar. Benjamin hosted a trivia in TTR based on the 3rd edition of his "Foundations of Fantom" series. Chris will be hosting a Thanksgiving Fantasy Football contest next week with a 100k $PAR pot. Hot damn! This week’s Two-for-Tuesday by Gian had the theme: units of time. Cool! TTR embarked on a crazy tipping adventure that could have broken Telegram. With 150k $PAR on the line, it’s amazing how robustly ParJar performed amidst this madness. By the end of the day, they reached 130k tips. Amazeballs! Charlotte’s Math trivia was fun as always. In the latest #PFFL update shared by Andy, the folks who are into the playoffs are Hang, Clinton, Alexis, Andy and Chris. Nilz, Ken, Kamo, Connor and Cap will be clashing for the remaining playoff spot. In the #FPL update for Game Week 13 shared by LordHades, LH is still leading the table with NovelCloud and Alexis following closely behind. Doc Vic (from Cuba) hosted another Champions League wager in TTR this week. $INE, the native token of IntelliShare project, was added to ParJar this week. Welcome! For more on what’s in store for ParJar, make sure to read Cap’s biweekly digestive. Lolarious stuff from Tito. Haha! aXpire’s Bilr intro video explains how the platform works. The latest updates from the project can be seen here. This week’s $AXPR burn saw 20k tokens sent to the genesis address. CEO Gary Markham will be speaking at a Hedge Fund Association event on Digital Assets on the 11th of December in NYC. The new website is out as well. Few weeks back, 2gether CEO Ramón Ferraz attended a Cecabank event on Securities as a speaker which was covered by the news outlet Expansión. Here's the full article. 
And thanks guys for the shoutout to my Hackernoon article (which I wrote with my co-author Rohit) on 5 Must Have Crypto Asset Management Tools. Founding Partner Luis Estrada travelled to the Fintech Innovation Summit 2019 to talk all things crypto and blockchain. The first group of potential XIO incubatees were revealed this week. Both are familiar names for Parachuters: Uptrennd and Opacity. Voting will begin on the 6th of December. Learn more about Round One from Zachary’s explainer video and email. An early sentiment poll of $1UP vs $OPQ was also put up. This led to an interesting consideration: should all nominated startups be stakeable instead of making it a competition? Did you know the USD value of total $BOMB tokens burned to date was ~144k? That is insane! The first $XIO liquidity pool on Uniswap was also started this week. Watch the explainer video to see how it works. Looks like Shingo from Ethos will be dipping his feet in the pool too. This is essentially what Zachary refers to as Proof-of-Liquidity. In a bit of sobering news, Wysker filed for bankruptcy last week. Running a startup is hard. Best wishes from the Parachute crew on your next journey. As a Black Friday surprise, referral earnings on Birdchain will be boosted over this weekend. For other updates, watch the video by CEO Joao Martins. The $XIO liquidity pool on Uniswap is a pretty ingenious solution for low-marketcap projects. A few weeks back, we had shared that Fantom was hosting a blockchain challenge at the AfricArena 2019 Summit with XAR Network. Here’s a summary from the contest. Congratulations to HouseAfrica. The BUIDL contest announced a few weeks back was expanded to increase the prize pool and include developers working on the Cosmos stack as well. Click here for the latest technical update. The Afghan Ministry of Health will be using Fantom's public chain to fight the menace of counterfeit drugs and for other public health initiatives. Read more about it here. 
The news was covered by Coinspeaker, CryptoDiffer, Gagarin News, ICO Analytics and Upblock. Upblock’s review report on the project (along with an $FTM giveaway) was also published this week. Anybody still unclear about what Fantom does should definitely see this video made by Blockcove and read the Fantom Vision by CMO Michael Chen. In light of Uptrennd’s nomination to the XIO Incubator Round I, founder Jeff Kirdeikis hosted an AMA. Original Content is always appreciated on the platform and the video guide explains how to create it. When you rank up on the platform, your points get locked. 30% of all those locked-up points are burned each month to increase scarcity. The 2020 roadmap looks exciting! An Opacity Economic Advisory Board will be set up soon to explore optimisation of $OPQ tokenomics. Read more about it here. The Hydro crew travelled to a FinTech Friday event hosted by Barclays in NYC this week. Check out the latest Hydro Labs dev update here. The project is one of the semi-finalists at MassChallenge FinTech, to be held in December. Good luck! PayTrie is the latest dApp to join the Hydro dApp store as a 3rd party partner. For the latest update on the store, click here. Potluck at Hydro HQ on Thanksgiving! As part of Silent Notary’s new marketing push, the first advertising campaign kicked off on masternodes.online and foundico.com. And another feature on The Bitcoin News. Don’t forget to make a note of the updates in the $SNTR Telegram channels. As mentioned last week, the Sentivate crew sat down for an AMA with The Gem Hunters this week. Here’s the transcript. P.S. It includes the first glimpse of Artifacts, as promised. Click here and here for the first glimpses of the Mycro Hunter app. OST is looking for a Senior Product Manager to join their crew. Apply if you’re up for it. SelfKey fans, make sure to vote for your favourite dashboard setup on the platform. The team travelled to the Asia Cryptocurrency Investment Forum in Bangkok this week. 
Hope you had an opportunity to say Hi if you were around. The Constellation mainnet, Hypergraph, will launch soon. But what happens to your ERC20 $DAG tokens? Fret not. The team will put out detailed instructions for the swap to mainnet. Here's an article and video to start with. VP Finance Mateo Gold goes into more details of the swap. New to Constellation? Watch this community-made video that explains what the project is all about. Like Fantom, Upblock also published a detailed review report on Constellation. Their AMA with Chainlink was this week. Next week’s Arena Match Raffle is sponsored by Space Misfits. The winner of the next Arena Match Raffle will take home The Behemoth Blueprint, worth ~USD 300. Catch up on the latest on District0x from the District Weekly. The District Registry was deployed to mainnet and has been running without hiccups. Read more about it in the Dev Update. The Nasdaq Composite was added to Pynk’s price prediction platform this week. Congratulations to Van P for winning the first Shuffle Raffle built by Trooper George. Harmony’s BEP2-ERC20 bridge now works both ways. The results of the 3-month survey from MBA researchers of UW Foster School exploring market readiness of Harmony are out. They indicate that the Credit Unions, Trade Finance and Mobile Data industries are ripe for disruption by Harmony. Go get’em! Development of Harmony's Pangaea, an experimental game for interacting with Harmony, is progressing as scheduled. Harmony is now integrated with the IncognitoChain project, which allows cryptocurrencies to be transacted privately using sidechains. Even those cryptos which are not primarily built as privacy coins. Noice! Click here to find out how to transact $ONE tokens privately. CEO Stephen Tse’s yearly review of updates will be a lifesaver for anyone who’s missed all Harmony news so far. And with that, it’s a wrap. See you again soon. Cheers!
With the large number of new readers coming to this sub we need to make information easy to access so those readers can make informed decisions. We all know there is an unusually large amount of Fear, Uncertainty and Doubt (FUD) surrounding EOS. Frankly, when clear evidence is provided it’s not that difficult to see EOS for the extremely valuable project it is. This post hopes to begin to put an end to all the misinformation by doing the following:
Giving a clear and concise answer to the most frequently asked questions in regards to EOS.
Giving a more in-depth answer for those who want to read more.
Allowing readers to make informed decisions by making credible information easy to access.
As EOS climbs the ranks we need to recognise there are going to be a lot of skeptical readers coming over and posting their questions. Sometimes they will be irrational, hostile and often just looking for a reaction. We should make it our responsibility to welcome everyone and refrain from responding emotionally to provocative posts, instead providing factual and rational answers. I will add to this post as and when I can; if you have any ideas or spot any mistakes, let me know and I'll get them fixed ASAP. I'm planning to add a bit on the team, centralisation and DPOS, governance and EOS VC shortly, but please let me hear your suggestions!
1. How do you register/claim your EOS tokens before June 2018?
Select Metamask, MyEtherWallet, or Ethereum Wallet
Follow the guide.
Remember the reason you need to register your Ethereum ERC-20 address: in order for the balance of your EOS Tokens to be included in the Snapshot (if a Snapshot is created), you must register your Ethereum address with an EOS public key. The EOS snapshot will take place prior to 1 June 2018. After this point your ERC-20 EOS tokens will be frozen, and you will be issued EOS tokens on the EOS blockchain.
So PLEASE REGISTER your Ethereum address NOW, don't forget about it, or plan on doing it some time in the near future.
There are a lot of submissions about this in /eos, so rather than making a new one please reply to this thread with any questions you may have. Don't forget to join the EOS mailing list: https://eos.io/#subscribe and join the EOS community on your platform(s) of choice: Telegram, Discord and/or Facebook. And remember, if anyone instructs you to transfer ETH to an EOS contract address that doesn't match the address found on https://eos.io you are being scammed.
2. How will the ERC-20 EOS tokens be transferred to the native blockchain?
There is no automatic transfer! Read the long answer, then read it again: registering your Ethereum wallet is mandatory!
Within 23 hours after the end of the final period on June 1, 2018 at 22:59:59 UTC, all EOS Tokens will become fixed (i.e. frozen) and will become non-transferable on the Ethereum blockchain. In order to ensure your tokens are transferred over to the native blockchain, you must register your Ethereum address with an EOS public key; if you do not, you will lose all your tokens! I am not going to link any tutorials as there are many that can be found by searching Google and YouTube. block.one is helping with the development of snapshot software that can be used to capture the EOS token balance and registered EOS public key of wallets on the Ethereum blockchain. It is then down to the community to create the snapshot. This snapshot can be used when generating a genesis block for a blockchain implementing eos.io software. block.one will not be launching EOS blockchains or operating any of their nodes.
Exchange Support: Some exchanges have announced that they will support the token swap. Although using this method will undoubtedly be much simpler than registering the tokens yourself, it also comes with its pitfalls.
It is highly likely there are going to be multiple networks running on the eos.io software that use the snapshot. It is highly unlikely that exchanges will support them all.
It is highly likely that exchanges will not support airdrops that use the snapshot.
Exchanges that have announced support for the token swap include:
EOS.IO software is aiming to provide a decentralized operating system which can support thousands of industrial scale DApps by enabling vertical and horizontal scaling.
EOS.IO is software that introduces a blockchain architecture designed to enable vertical and horizontal scaling of decentralized applications. This is achieved through an operating system-like construct upon which applications can be built. The software provides accounts, authentication, databases, asynchronous communication and the scheduling of applications across multiple CPU cores and/or clusters. The resulting technology is a blockchain architecture that has the potential to scale to millions of transactions per second, eliminates user fees and allows for quick and easy deployment of decentralized applications.
CEO Brendan Blumer - Founder of ii5 (1group) and okay.com. He has been in the blockchain industry since 2014 and started selling virtual assets at the age of 15. Brendan can be found on the Forbes Crypto Rich List. Brendan can be found on Twitter.
CTO Dan Larimer - Dan's the visionary industry leader who built BitShares, Graphene and Steemit, as well as the increasingly popular Delegated Proof of Stake governance and Decentralised Autonomous Organization concepts. He states his mission in life is “to find free market solutions to secure life, liberty, and property for all.” Dan can also be found on the Forbes Crypto Rich List. Dan can be found on Twitter and Medium.
Partner Ian Grigg - Financial cryptographer who's been building cryptographic ledger platforms for 2+ decades. Inventor of the Ricardian Contract and Triple-Entry Accounting.
6. Which consensus mechanism does EOS use and what are Block Producers?
Delegated Proof of Stake (DPOS) with Byzantine Fault Tolerance. Block Producers (BPs) produce the blocks of the blockchain and are elected by token holders that vote for them. BPs will earn block rewards for their service, these block rewards come in the form of EOS tokens produced by token inflation.
“EOS.IO software utilizes the only known decentralized consensus algorithm proven capable of meeting the performance requirements of applications on the blockchain, Delegated Proof of Stake (DPOS). Under this algorithm, those who hold tokens on a blockchain adopting the EOS.IO software may select block producers through a continuous approval voting system. Anyone may choose to participate in block production and will be given an opportunity to produce blocks, provided they can persuade token holders to vote for them. The EOS.IO software enables blocks to be produced exactly every 0.5 second and exactly one producer is authorized to produce a block at any given point in time. If the block is not produced at the scheduled time, then the block for that time slot is skipped. When one or more blocks are skipped, there is a 0.5 or more second gap in the blockchain. Using the EOS.IO software, blocks are produced in rounds of 126 (6 blocks each, times 21 producers). At the start of each round 21 unique block producers are chosen by preference of votes cast by token holders. The selected producers are scheduled in an order agreed upon by 15 or more producers. Byzantine Fault Tolerance is added to traditional DPOS by allowing all producers to sign all blocks so long as no producer signs two blocks with the same timestamp or the same block height. Once 15 producers have signed a block the block is deemed irreversible. Any byzantine producer would have to generate cryptographic evidence of their treason by signing two blocks with the same timestamp or blockheight. Under this model an irreversible consensus should be reachable within 1 second."
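The schedule arithmetic in the passage above (0.5-second slots, 21 elected producers, 6 consecutive blocks per producer, 126-block rounds) can be sketched in a few lines of Python. The producer names are placeholders, not part of any real schedule:

```python
# Hedged sketch of the DPOS schedule arithmetic described above: blocks are
# produced in 0.5-second slots, each of the 21 elected producers gets 6
# consecutive slots, so a full round is 21 * 6 = 126 blocks.

SLOT_SECONDS = 0.5
BLOCKS_PER_PRODUCER = 6
NUM_PRODUCERS = 21
BLOCKS_PER_ROUND = NUM_PRODUCERS * BLOCKS_PER_PRODUCER  # 126

def producer_for_slot(slot, schedule):
    """Return which of the 21 scheduled producers owns a given 0.5s slot."""
    assert len(schedule) == NUM_PRODUCERS
    slot_in_round = slot % BLOCKS_PER_ROUND
    return schedule[slot_in_round // BLOCKS_PER_PRODUCER]

schedule = [f"producer{i:02d}" for i in range(NUM_PRODUCERS)]
# Slots 0-5 belong to the first producer, 6-11 to the second, and so on.
assert producer_for_slot(0, schedule) == "producer00"
assert producer_for_slot(6, schedule) == "producer01"

# Irreversibility: a block is deemed final once 15 of the 21 producers
# have signed it, which is why "15/21" recurs throughout this FAQ.
IRREVERSIBILITY_THRESHOLD = 15
```

If a producer misses its slot the block is simply skipped, leaving a 0.5-second (or longer) gap, exactly as the quoted passage describes.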
7. How does the voting process work?
The voting process will begin once the Block Producer community releases a joint statement ensuring that it is safe to import private keys and vote. Broadly speaking there will be two methods of voting:
Command Line Interface (CLI) tools
EOS Canada has created eosc, a CLI tool that supports Block Producer voting. Other Block Producer candidates such as LibertyBlock are releasing a web portal that will be ready for mainnet launch. There will be many more options over the coming weeks; please make sure you are always using a service from a trusted entity. Remember: do not import your private key until you have seen a joint statement released from at least five Block Producers that you trust which states when it is safe to do so. Ignoring this warning could result in lost tokens.
8. What makes EOS a good investment?
Team - EOS is spearheaded by the visionary that brought us the hugely successful Bitshares and Steem; with two successful projects already under his belt, there is arguably no one more accomplished in the space.
Funding - EOS is one of the best funded projects in the space. The block.one team has committed $1B to investing in funds that grow the EOS ecosystem. EOS VC funds are managed by venture leaders distributed around the world to ensure founders in all markets have the ability to work directly with local investors. Incentives such as the EOS hackathon are also in place, with $1,500,000 USD in prizes across 4 events.
Community Focus - The team is aware that a project's success depends almost entirely on its adoption. For this reason there has been a huge push to develop a strong worldwide community. There is already a surplus of block producer candidates that have registered their interest and started to ready themselves for the launch, and incentives such as the EOS hackathon are being used to grow the community. An index of projects using EOS can be found at https://eosindex.io/posts.
Technical Advantages - See point 9!
9. What are the unique selling points of EOS?
Potential to scale to millions of transactions per second
10. Is there a working product?
This depends entirely on your definition of working product. If a fully featured developer release meets your definition, then yes! Otherwise the public release will be June 2018.
EOS differs from other projects in that it aims to deliver a fully featured version of the software on launch. The Dawn 3.0 RC1 feature-complete pre-release became available on April 5th. This version has all the features of the final release that is due June 2018. Further development will involve preparing the final system contract which implements all of the staking, voting, and governance mechanics. The common notion that no viewable code has been published is wrong; the initial Dawn 1.0 release has been available since September 14th 2017.
11. EOS is an ERC-20 token, how can it possibly be a competitor to other platforms?
The ERC-20 token is used only for raising funds during the token distribution; all tokens will be transferred to the native blockchain once launched.
The EOS team has clearly stated their reason for choosing the Ethereum network when they described the rationale behind the ICO model. Specifically, the ICO should be a fair and auditable process, with as little trust required as possible. If you believe that an ICO should be fair, auditable, and trustless, you have no choice but to use a decentralized smart contract blockchain to run the ICO, the largest and by far most popular of which is Ethereum. Since EOS is intended to be a major competitor for Ethereum, some have seen this as a hypocritical choice. - Stolen from trogdor on Steemit (I couldn’t word it any better myself).
12. Why do the eos.io T&C’s say the ERC-20 token has no value?
The EOS T&C’s famously state:
"The EOS Tokens do not have any rights, uses, purpose, attributes, functionalities or features, express or implied, including, without limitation, any uses, purpose, attributes, functionalities or features on the EOS Platform."
This is legal wording to avoid the legal complications in this emerging space; block.one do not want to find themselves in a lawsuit, as we are seeing with an increasing number of other ICOs. Most notably Tezos (links below).
This all comes down to legal issues. Anyone who’s been into crypto for five minutes knows that government bodies such as the Securities and Exchange Commission (SEC) are now paying attention to crypto in a big way. Many token creators that launched ICOs are now in deep water for selling unregistered securities.
A filing from the Tezos lawsuit:
"In sum, Defendants capitalized on the recent enthusiasm for blockchain technology and cryptocurrencies to raise funds through the ICO, illegally sold unqualified and unregistered securities, used a Swiss-based entity in an unsuccessful attempt to evade U.S. securities laws, and are now admittedly engaged in the conversion, selling, and possible dissipation of the proceeds that they collected from the Class through their unregistered offering."
To ensure EOS tokens are not classed as an unregistered security, block.one has made it clear that they are creating the EOS software only and won’t be launching a public blockchain themselves. This task is left down to the community, or more precisely, the Block Producers (BPs). The following disclaimer is seen after posts from block.one:
"block.one is a software company and is producing the EOS.IO software as free, open source software. This software may enable those who deploy it to launch a blockchain or decentralized applications with the features described above. block.one will not be launching a public blockchain based on the EOS.IO software. It will be the sole responsibility of third parties and the community and those who wish to become block producers to implement the features and/or provide the services described above as they see fit. block.one does not guarantee that anyone will implement such features or provide such services or that the EOS.IO software will be adopted and deployed in any way.”
It is expected that many blockchains using eos.io software will emerge. To ensure DAPPs are created on an ecosystem that aligns with the interests of block.one, a $1bn fund has been created to incentivise projects to use this blockchain.
“A lot of token distributions only allow a small amount of people to participate. The EOS Token distribution structure was created to provide a sufficient period of time for people to participate if they so choose, as well as give people the opportunity to see the development of the EOS.IO Software prior to making a decision to purchase EOS Tokens.”
It is also worth noting that block.one had no way of knowing how much the token distribution would raise, as the amount is determined by the free market, and the length of the token distribution is coded into the Ethereum smart contract, which cannot be changed.
14. Where is the money going from the token distribution?
Funding for the project was raised before EOS was announced, the additional money raised from the token distribution is largely going to fund projects on EOS.
A large portion of the money raised is getting put back into the community to incentivise projects using eos.io software. block.one raised all the money they needed to develop the software before the ERC-20 tokens went on sale. There are some conspiracies that block.one are pumping the price of EOS using the funds raised. The good thing about blockchain is you can trace all the transactions, which show nothing of the sort. Not only this, but the EOS team are going to have an independent audit after the funding is complete for peace of mind.
From eos.io FAQ:
“block.one intends to engage an independent third party auditor who will release an independent audit report providing further assurances that block.one has not purchased EOS Tokens during the EOS Token distribution period or traded EOS Tokens (including using proceeds from the EOS Token distribution for these purposes). This report will be made available to the public on the eos.io website.”
A more complete list of EOS projects can be found at eosindex.io.
16. Dan left his previous projects, will he leave EOS?
When EOS has been created, Dan will move on to creating projects for EOS with block.one.
When a blockchain project has gained momentum and a strong community has formed, the project takes on a life of its own, and communities often have ideas that differ from the creators'. As we have seen with the Bitcoin and Ethereum hard forks, you can't pivot a community too far in a different direction, especially if it changes the fundamentals of the blockchain. Instead of acting like a tyrant, Dan has let the communities do what they want and gone a different way. Both Bitshares and Steem were left in a great position, and with Dan's help turned out to be two of the most successful blockchain projects to date. Some would argue the most successful projects that are actually useable and have a real use case. What Dan does best is build the architecture and show what's possible. Anyone can then go on to do the upgrades. He is creating EOS to build his future projects upon. He has stated he loves working at block.one with Brendan and the team, and there is far too much momentum behind EOS for him to possibly leave.
17. How would the network handle DDoS attacks on Block Producers?
No one could have better knowledge on this subject than our Block Producer candidates; I have chosen to look to EOS New York for this answer:
"DDoS'ing a block producer is not as simple as knowing their IP address and hitting "go". We have distributed systems engineers in each of our candidate groups that have worked to defend against DDoS attacks in their careers. Infrastructure can be built in a way to minimize the exposure of the Block Producing node itself and to prevent a DDoS attack. We haven't published our full architecture yet but let's take a look at fellow candidate EOSphere to see what we mean. As for the launch of the network, we are assuming there will be attacks on the network as we launch. It is being built into the network launch plans. I will reach out to our engineers to get a more detailed answer for you. What also must be considered is that there will be 121 total producing and non-producing nodes on the network. To DDoS all 121, which are located all around the world with different security configurations, at the exact same time would be a monumental achievement."
18. If block producers can alter code how do we know they will not do so maliciously?
Block producers are voted in by stake holders.
Changes to the protocol, constitution or other updates are proposed to the community by block producers.
Changes take 2 to 3 months because block producers must maintain 15/21 approval for a set amount of time for the changes to be processed.
To ensure bad actors can be identified and expelled the block.one backed community will not back an open-entry system built around anonymous participation.
For this question we must understand the following.
Governance and why it is used.
The process of upgrading the protocol, constitution & other updates.
Dan’s view on open-entry systems built around anonymous participation.
Governance: Cryptography can only be used to prove logical consistency. It cannot be used to make subjective judgment calls, determine right or wrong, or even identify truth or falsehood (outside of consistency). We need humans to perform these tasks and therefore we need governance! Governance is the process by which people in a community:
Reach consensus on subjective matters of collective action that cannot be captured entirely by software algorithms;
Carry out the decisions they reach; and
Alter the governance rules themselves via Constitutional amendments.
Embedded into the EOS.IO software is the election of block producers. Before any change can be made to the blockchain these block producers must approve it. If the block producers refuse to make changes desired by the token holders then they can be voted out. If the block producers make changes without permission of the token holders then all other non-producing full-node validators (exchanges, etc) will reject the change.
Upgrade process: The EOS.IO software defines the following process by which the protocol, as defined by the canonical source code and its constitution, can be updated:
Block producers propose a change to the constitution and obtain 15/21 approval.
Block producers maintain 15/21 approval of the new constitution for 30 consecutive days.
All users are required to indicate acceptance of the new constitution as a condition of future transactions being processed.
Block producers adopt changes to the source code to reflect the change in the constitution and propose it to the blockchain using the hash of the new constitution.
Block producers maintain 15/21 approval of the new code for 30 consecutive days.
Changes to the code take effect 7 days later, giving all non-producing full nodes 1 week to upgrade after ratification of the source code.
All nodes that do not upgrade to the new code shut down automatically.
Under the default configuration of the EOS.IO software, the process of updating the blockchain to add new features takes 2 to 3 months, while updates to fix non-critical bugs that do not require changes to the constitution can take 1 to 2 months.
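As a sanity check on the "2 to 3 months" figure, the minimum timeline implied by the steps above can be added up directly. This is a back-of-the-envelope sketch, not part of the EOS.IO code:

```python
# Minimum timeline implied by the upgrade process described above:
# 30 days of sustained 15/21 approval for the new constitution, 30 more
# days of approval for the matching code change, then a 7-day window
# before the change takes effect.

def approved(signing_producers, threshold=15):
    """True once at least 15 of the 21 elected producers approve."""
    return len(set(signing_producers)) >= threshold

CONSTITUTION_APPROVAL_DAYS = 30
CODE_APPROVAL_DAYS = 30
ACTIVATION_DELAY_DAYS = 7

total = CONSTITUTION_APPROVAL_DAYS + CODE_APPROVAL_DAYS + ACTIVATION_DELAY_DAYS
assert total == 67  # just over two months, consistent with "2 to 3 months"
assert approved(range(15)) and not approved(range(14))
```

The approval must be *sustained* for each 30-day window, so any lapse below 15/21 restarts that stage and stretches the timeline toward the 3-month end.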
Open-entry systems built around anonymous participation: To ensure bad actors can be identified and expelled, the block.one backed community will not back an open-entry system built around anonymous participation. Dan's quote:
"The only way to maintain the integrity of a community is for the community to have control over its own composition. This means that open-entry systems built around anonymous participation will have no means of expelling bad actors and will eventually succumb to profit-driven corruption. You cannot use stake as a proxy for goodness whether that stake is held in a bond or a shareholder’s vote. Goodness is subjective and it is up to each community to define what values they hold as good and to actively expel people they hold as bad. The community I want to participate in will expel the rent-seeking vote-buyers and reward those who use their elected broadcasting power for the benefit of all community members rather than special interest groups (such as vote-buyers). I have faith that such a community will be far more competitive in a market competition for mindshare than one that elects vote buyers."
19. What is the most secure way to generate EOS key pairs?
Block producer candidates EOS Cafe and EOS New York have come forward to help the community with this topic. The block producer candidate eosnewyork has kindly posted a tutorial on Steemit detailing the steps that need to be taken to generate key pairs using the official code on the EOS.IO Github. The block producer candidate eoscafe has gone a step further and released an Offline EOS Key Generator application, complete with GUI, for Windows, Linux & Mac. Not only can this application generate key pairs but it can also validate key pairs and resolve public keys from private keys. This application has also been vouched for by EOS New York.
I finally figured out how to file my taxes with crypto gains. Using multiple exchanges, Cointracking.info, and Turbotax.
I spent a lot of time gaining information on Reddit so I figured I'd give back by summarizing how I filed my (U.S.) taxes in what I believe was the correct manner. Real quick, I used Cointracking.info (can also use Bitcoin.tax) to calculate my gains/losses using FIFO and generate my tax reports. I filed using the Deluxe desktop version of Turbotax so I can upload my reports. NO I DID NOT HAVE TO MANUALLY ENTER EACH TRANSACTION! But you have to get the CD/Downloaded version of Turbotax. There is no web version that allows you to upload your reports...and don't worry, after calling to confirm this with Turbotax I let them have it. Also, I am not being paid or providing referrals to any of these sites/programs....they worked for me so I figured I'd share for free. I am not a tax or accountant professional so take the following with a grain of salt.... First things first, you have to figure out your "realized" crypto gains. I'm not going to go into detail here as many will ignore the truth that converting from one crypto to another is considered a taxable event. You're an adult, you make your own decisions but honestly I was surprised at how little of my crypto was considered realized once I ran all my numbers. Since I've started crypto, I've used a couple different exchanges and Shapeshift (will never use again after this whole ordeal). Cointracking.info gives instructions on how to import your trades from all the major exchanges. (WARNING: pay attention to the timezones; Coinbase for example shows you the time of transaction in your timezone when you're viewing the site but once you download the data it will switch to Pacific Time, Binance will be in UTC time...etc.) If you don't have the time correct then your FIFO calculations and Short/Long term gains can be off. Shapeshift or any other non-exchange activity can be uploaded using excel or you can manually add individual transactions. 
Don't forget to review the transactions once you think you have everything in your coin-tracking site, as the super nerds who created the algorithms aren't always correct. Ask yourself questions like "did I really make a trade at 3:00 am on a work night?" or "does my final balance match what I actually have in my wallets?". Once you have all your transactions complete, you can download Form 8949 (keep this for your records in case you're ever audited or more information is requested) and a TurboTax-specific file called the Capital Gains Report. File your taxes on the DESKTOP version of TurboTax as you normally would. When it asks about income, there is a place where you can upload documents. Upload the Capital Gains Report provided. TurboTax is smart enough to recognize and auto-populate all ~75 trades that I had onto a 1099-B. Again, NOT A PROFESSIONAL: I heard something about mailing in your Schedule D/8949, but everything seems to be reported correctly on the 1099-B, so this does not appear to be necessary. And that was it! I live in North Carolina, and I did not see anywhere that I needed to report any of this information on the state form. If you don't want to believe a stranger on Reddit, you can get the desktop version of Premier TurboTax, which comes with free CPA help over the phone. Or you can go to your tax professional and have him/her do your taxes, but they'll need the Form 8949 provided to you by your coin-tracking site. Feel free to let me know if I'm an idiot in the comments as, again, I am not a tax professional nor an accountant, just a guy who likes to DIY and learn how/why taxes work to save some money.
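To make the FIFO mechanics concrete, here is a minimal Python sketch of FIFO lot matching with a short/long-term split. This is illustrative only, not tax advice, and not how Cointracking.info actually implements it; the class name, the 365-day threshold, and all prices are assumptions for the example.

```python
from collections import deque
from datetime import datetime, timedelta

class FifoLedger:
    """Toy FIFO lot matcher for a single asset (hypothetical helper)."""

    def __init__(self):
        self.lots = deque()  # each lot: [acquired_date, quantity, unit_cost]

    def buy(self, date, qty, unit_cost):
        self.lots.append([date, qty, unit_cost])

    def sell(self, date, qty, unit_price):
        """Match a sale against the oldest lots first (FIFO).
        Returns one (gain, is_long_term) tuple per matched lot."""
        results = []
        remaining = qty
        while remaining > 1e-12:
            lot_date, lot_qty, lot_cost = self.lots[0]
            used = min(remaining, lot_qty)
            gain = used * (unit_price - lot_cost)
            # Held more than a year -> long-term (simplified rule)
            long_term = (date - lot_date) > timedelta(days=365)
            results.append((round(gain, 2), long_term))
            lot_qty -= used
            remaining -= used
            if lot_qty <= 1e-12:
                self.lots.popleft()       # lot fully consumed
            else:
                self.lots[0][1] = lot_qty  # partial lot remains
        return results

ledger = FifoLedger()
ledger.buy(datetime(2017, 1, 10), 1.0, 1000.0)   # 1 BTC @ $1,000
ledger.buy(datetime(2017, 12, 1), 1.0, 15000.0)  # 1 BTC @ $15,000
# Selling 1.5 BTC consumes the Jan lot (long-term gain) and half
# of the Dec lot (short-term loss).
print(ledger.sell(datetime(2018, 2, 1), 1.5, 9000.0))
# → [(8000.0, True), (-3000.0, False)]
```

Note how getting the acquisition *times* wrong (the timezone warning above) shifts which lot is "oldest" and can flip a gain between short- and long-term.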
https://codevalley.com/whitepaper.pdf This document treats emergent coding from a philosophical perspective. It has a good introduction and description of the tech, followed by two sections of justification: one from the perspective of Fred Brooks' "No Silver Bullet" criteria, and one from an industrialization criterion.
Mark Fabbro's presentation from the Bitcoin Cash City Conference, which outlines the motivation, basic mechanics, and role of Bitcoin Cash in reproducing the industrial revolution in the software industry.
"Building the Bitcoin Cash City", a presentation highlighting how the emergent coding group of companies fits into the adoption roadmap of North Queensland.
Forging Chain Metal, by Paul Chandler, CEO of Aptissio, one of the startups in the emergent coding space, which secured a million in seed funding last year.
Bitcoin Cash App Exploration: a series of apps that are among the first to be built with emergent coding, presented, and, in the case of the Cashbar, demonstrated at the conference.
How does Emergent Coding prevent developer capture? A developer's Agent does not know what project it is contributing to and is thus paid for the specific contribution. The developer controls the terms of the payment, rather than the alternative: an employer with an employment agreement. Why does Emergent Coding use Bitcoin BCH?
Both emergent coding and Bitcoin BCH are decentralized: As emergent coding is a decentralized development environment consisting of Agents providing their respective design services, each contract received by an Agent requires a BCH payment. As Agents are hosted by their developer owners, who may reside in any one of 150 countries, Bitcoin Cash - a peer-to-peer electronic cash system - is ideal for including a developer regardless of geographic location.
Emergent coding will increase the value of the Bitcoin BCH blockchain: With EC, there are typically many contracts to build an application (Cashbar was designed with 10000 contracts or so). EC adoption will increase the value of the Bitcoin BCH blockchain in line with this influx of quality economic activity.
Emergent coding is being applied to BCH software first: One of the first market verticals being addressed with emergent coding is Bitcoin Cash infrastructure. We are already seeing quality applications created using emergent coding (such as the HULA, Cashbar, PH2, vending, ATMs, etc.). More apps and tools supporting Bitcoin Cash will attract more merchants and businesses to BCH.
Emergent coding increases productivity: Emergent coding increases developer productivity and reduces duplication compared to other software development methods. Emergent coding can provide BCH devs with an advantage over other coins. A BCH dev productivity advantage will accelerate Bitcoin BCH becoming the first global currency.
Emergent coding produces higher quality binaries: Higher quality software leads to a more reliable network.
1. Who/what is Code Valley? Aptissio? BCH Tech Park? Mining and Server Complex? Code Valley Corp Pty Ltd is the company founded to commercialize emergent coding technology. Code Valley is incorporated in North Queensland, Australia. See https://codevalley.com Aptissio Australia Pty Ltd is a company founded in North Queensland and an early adopter of emergent coding. Aptissio is applying EC to Bitcoin BCH software. See https://www.aptissio.com Townsville Technology Precincts Pty Ltd (TTP) was founded to bring together partners to answer the tender for the Historic North Rail Yard Redevelopment in Townsville, North Queensland. The partners consist of P+I, Conrad Gargett, HF Consulting, and a self-managed superannuation fund (SMSF), with Code Valley Corp Pty Ltd expected to be signed as an anchor tenant. TTP answered a Townsville City Council (TCC) tender with a proposal for a AUD$53m project (stage 1) to turn the yards into a technology park, and subsequently won the tender. The plan calls for the bulk of the money to be raised in the Australian equity markets, with the city contributing 28% for remediation of the site and just under 10% coming from the SMSF. Construction is scheduled to begin in mid-2020 and be completed two years later. Townsville Mining Pty Ltd was set up to develop a Server Complex in the Kennedy Energy Park in North Queensland. The site has undergone several studies as part of a due diligence process, with encouraging results for its competitiveness in terms of real estate, power, cooling and data.
TM are presently in negotiations with the owners of the site and is presently operating under an NDA.
The business model calls for leasing "sectors" to mining companies that wish to mine, allowing those companies to control their own direction.
Since Emergent Coding uses the BCH rail, TM is seeking to contribute to BCH security with an element of domestic mining.
TM are working with American partners to lease one of the sectors to meet that domestic objective.
The site will also host Emergent Coding Agents and Code Valley and its development partners are expected to lease several of these sectors.
TM hopes to have the site operational within 2 years.
2. What programming language are the "software agents" written in? Agents are "built" using emergent coding. You select the features you want your Agent to have and send out the contracts. In a few minutes you are in possession of a binary ELF. You run your ELF on your own machine and it will peer with the emergent coding and Bitcoin Cash networks. Congratulations, your Agent is now ready to accept its first contract. 3. Who controls these "agents" in a software project? You control your own Agents. It is a decentralized development system. 4. What is the software license of these agents? Full EULA here, now. A license gives you the right to create your own Agents and participate in the decentralized development system. We will publish the EULA when we release the product. 5. What kind of software architecture do these agents have? Daemons responding to API calls? Background daemons that make remote connections to listening applications? Your Agent is a server that requires you to open a couple of ports so as to peer with both the EC and BCH networks. If you run a BCH full node you will be familiar with this process. Your Agent will create a "job" for each contract it receives and is designed to operate thousands of jobs simultaneously in various stages of completion. It is your responsibility to manage your Agent and keep it open for business, or risk losing market share to another developer capable of designing the same feature in a more reliable manner (or at better cost, less resource usage, faster design time, etc.). For example, there is competition at every classification, which is one reason emergent coding is on a fast path for improvement. It is worth reiterating here that Agents are only used in the software design process and do not perform any role in the returned project binary. 6. What is the communication protocol these agents use? The protocol is proprietary and is part of your license. 7. Are the agents patented? Who can use these agents?
It is up to you if you want to patent your Agent. The underlying innovation behind emergent coding is _feasible_ developer specialization. Emergent coding gives you the ability to contribute to a project without revealing your intellectual property, thus creating prospects for repeat business; it renders software patents moot. Who uses your Agents? Your Agents earn you BCH with each design contribution made. It would be wise to have your Agent open for business at all times and to encourage everyone to use your design service. 8. Do I need to cooperate with the Code Valley company all of the time in order to deploy Emergent Coding on my software projects, or can I do it myself, using documentation? It is a decentralized system. There is no single point of failure. Code Valley intends to defend the emergent coding ecosystem from abuse and bad actors, but that role is not on your critical path. 9. Let's say Electron Cash is an Emergent Coding project. I have found a critical bug in the binary. How do I report this bug, and what does Jonald Fyookball need to do, assuming the buggy component is a "shared component" pulled from EC "repositories"? If you built Electron Cash with emergent coding, it will have been created by combining several high-level wallet features designed into your project by their respective Agents. Obviously, behind the scenes there are many more contracts that these Agents will let, and so on. For example, the Cashbar combines just 16 high-level Point-of-Sale features but ultimately results in more than 10,000 contracts in toto. Should one of these 10,000 make a design error, Jonald only sees the high-level Agents he contracted. He can easily pinpoint which of these contractors is in breach. Similarly, this contractor can easily pinpoint which of its sub-contractors is in breach, and so on. The offender that breached their contract, wherever in the project they made their contribution, is easily identified.
For example, when my truck has a warranty problem, I do not contact the supplier of the faulty big-end bearing; I simply take it back to Mazda, who in turn will locate the fault. Finally, "...assuming the buggy component is a 'shared component' pulled from EC 'repositories'?" - there are no repositories or "shared components" in emergent coding. 10. What is your licensing/pricing model? Per project? Per developer? Per machine? Your Agent charges for each design contribution it makes (i.e. per contract). The exact fee is up to you. The resulting software produced by EC is unencumbered. Code Valley's pricing model consists of a seat license; while we are still determining the exact policy, we feel the "Valley" (where Agents advertise their wares) should charge a small fee to help prevent gaming the catalogue, and a transaction fee to provide an income in proportion to operations. 11. What is the basic set of applications I need in order to deploy full Emergent Coding in my software project? What is the function of each application? Daemons, clients, APIs, frontends, GUIs, operating systems, databases, NoSQLs? A lot of details, please. There's just one. You buy a license and are issued with our product called Pilot. You run Pilot (a node) up on your machine and it will peer with the EC and BCH networks. You connect your browser to Pilot, typically via localhost, and you're in business. You can build software (including special kinds of software like Agents) by simply combining available features. Pilot allows you to specify the desired features and will manage the contracts and the decentralized build process. It also gives you access to the "Valley", which is a decentralized advertising site that contains the "business cards" of each Agent in the community, classified into categories for easy search. If we are to make a step change in software design, inventing yet another HLL will not cut it. As Fred Brooks puts it, an essential change is needed. 12.
How can I trust a binary when I cannot see the source? The emergent coding development model is very different to what you are used to. There are ways of arriving at a binary without source code. The Agents in emergent coding design their feature into your project without writing code. We can see the features we select, but cannot produce the source, as the design process doesn't use a HLL. The trust model is also different. The bulk of the testing happens _before_ the project is designed, not _after_. Emergent coding produces a binary with very high integrity, and arguably far more testing is done in emergent coding than in the incumbent methods you are used to. In emergent coding, your reputation is built upon the performance of your Agent. If your Agent produces substandard features, you are simply creating an opportunity for a competitor to increase their market share at your expense. Here are some points worth noting regarding bad actor Agents:
An Agent is a specialist and in emergent coding is unaware of the project they are contributing to. If you are a bad actor, do you compromise every contract you receive? Some? None?
Your client is relying on the quality of your contribution to maintain their own reputation. Long before any client will trust your contributions, they will have tested you to ensure the quality is at their required level. You have to be at the top of your game in your classification to even win business. This isn't some shmuck pulling your routine from a library.
Each contract to your Agent is provisioned, i.e. you advertise in advance what collaborations you require to complete your design. There is no opportunity for a "sign a Bitcoin transaction" Agent to be requesting "send an HTTP request" collaborations.
Your Agent never gets to modify code, it makes a design contribution rather than a code contribution. There is no opportunity to inject anything as the mechanism that causes the code to emerge is a higher order complexity of all Agent involvement.
There is near perfect accountability in emergent coding. You are being contracted and paid to do the design. Every project you compromise has an arrow pointed straight at you should it be detected even years later.
Security is a whole other ball game in emergent coding, and current rules do not necessarily apply. 13. Every time someone rebuilds their application, do they have to pay over again for all "design contributions"? (Or is the ability to license components at a fixed single price, for at least a limited period or even perpetually, supported by the construction (agent) process?) You are paying for the design. Every time you build (or rebuild) an application, you pay the developers involved. They do not know they are "rebuilding". This sounds dire, but it costs far less than you think and there are many advantages. Automation is very high with emergent coding, so software design is completed for a fraction of the cost of incumbent design methods. You could perhaps rebuild many times before matching incumbent methods. Adding features is hard with incumbent methods - "...very few late-stage additions are required before the code base transforms from the familiar to a veritable monster of missed schedules, blown budgets and flawed products" (Brooks Jr 1987) - whereas with emergent coding, adding a late-stage feature requires a rebuild and hence seamless integration. With emergent coding, you can add an unlimited number of features without risking the codebase, as there isn't one. The second part of your question incorrectly assumes software is created from licensed components, rather than created by paying Agents to design features into your project without any licenses involved. 14. In this construction process, is the vendor of a particular "design contribution" able to charge differential rates of their own choosing? E.g. if I wanted to charge a super-low rate to someone from a third-world country versus charging slightly more when a global multinational corporation wants to license my feature? Yes. Developers set the price and policy of their Agent's service. The Valley (where your Agent is advertised) presently only supports a simple price policy.
The second part of your question incorrectly assumes features are encumbered with licenses. A developer can provide their feature without revealing their intellectual property. A client has the right to reuse a developer's feature in another project, but will find it uneconomical to do so. 15. Is "entirely free" a supported option during the contract negotiation for a feature? Yes. You set the price of your Agent. 16. "There is no single point of failure." Right now, it seems one needs to register, license the construction tech, etc. Is that going to change to a model where your company is not necessarily in that loop? If not, don't you think that's a single point of failure? It is a decentralized development system. Once you have registered, you become part of a peer-to-peer system. Code Valley has thought long and hard about its role and has chosen the Reddit model: it will set some rules for your participation and will detect or remove bad actors. If, in your view, Code Valley becomes a bad actor, you have control over your Agent, private keys and IP, and you can leave the system at any time. 17. What if I can't obtain a license because of some or other jurisdictional problem? Are you allowed to license the technology anywhere in the world, or just where your government allows it? We are planning to operate in all 150 countries. As EC is peer-to-peer, Code Valley does not need to register as a digital currency exchange or the like. Only those countries banning BCH will miss out (until such time as BCH becomes the first global electronic cash system). 18.
For example the Cashbar combines just 16 high level Point-of-Sale features but ultimately results in more than 10,000 contracts in toto.
It seems already a reasonably complex application, so well done on having that as a demo. Thank you. 19. I asked someone else a question about how it would be possible to verify whether an application (let's say one received a binary executable) has been built with your system of emergent coding. Is this possible? Yes, of course. If you used EC to build an application, you can sign it and claim anything you like. Your client knows it came from you because of your signature. The design contributions making up the application are not signed, but surprisingly there is still perfect accountability (see below). 20. I know it is possible to identify, for example, all source files and other metadata (like the build environment) that went into constructing a binary, by storing this data inside an executable. All metadata in emergent coding is stored offline. When your Agent completes a job, you have a log of the design agreements you made with your peers, etc., as part of the log. If you are challenged at a later date for breaching a design contract, you can pull your logs to see what decisions you made, what sub-contracts were let, etc. As every Agent has their own logs, the community as a whole has a completely trustless log of each project undertaken. 21. Is this being done with EC build products, and would it allow the recipient to validate that what they've been provided has been built only using "design contributions" cryptographically signed by their providers and nothing else (i.e. no code that somehow crept in that isn't covered by the contracting process)? The emergent coding trust model is very effective and has been proven in other industries. Remember, your Agent creates a feature in my project by actually combining smaller features contracted from other Agents; thus your reputation is linked to that of your suppliers. If Bosch makes a faulty relay in my Ford, I blame Ford for a faulty car, not Bosch, when my headlights don't work.
Similarly, you must choose and vet your sub-contractors to the level of quality that you yourself want to project. Once these relationships are set up, it becomes virtually impossible for a bad actor to participate in the system for long, or even from the get-go. 22. A look at the generated code, and a surprising answer to "why is every intermediate variable spilled?" Thanks to u/R_Sholes, this snippet shows the actual code for:

number = number * 10 + digit

generated as a part of:

sub read/integeboolean($, 0, 100) -> guess
; copy global to local temp variable
0x004032f2  movabs r15, global.current_digit
0x004032fc  mov    r15, qword [r15]
0x004032ff  mov    rax, qword [r15]
0x00403302  movabs rdi, local.digit
0x0040330c  mov    qword [rdi], rax
; copy global to local temp variable
0x0040330f  movabs r15, global.guess
0x00403319  mov    r15, qword [r15]
0x0040331c  mov    rax, qword [r15]
0x0040331f  movabs rdi, local.num
0x00403329  mov    qword [rdi], rax
; multiply local variable by constant, uses new temp variable for output
0x0040332c  movabs r15, local.num
0x00403336  mov    rax, qword [r15]
0x00403339  movabs rbx, 10
0x00403343  mul    rbx
0x00403346  movabs rdi, local.num_times_10
0x00403350  mov    qword [rdi], rax
; add local variables, uses yet another new temp variable for output
0x00403353  movabs r15, local.num_times_10
0x0040335d  mov    rax, qword [r15]
0x00403360  movabs r15, local.digit
0x0040336a  mov    rbx, qword [r15]
0x0040336d  add    rax, rbx
0x00403370  movabs rdi, local.num_times_10_plus_digit
0x0040337a  mov    qword [rdi], rax
; copy local temp variable back to global
0x0040337d  movabs r15, local.num_times_10_plus_digit
0x00403387  mov    rax, qword [r15]
0x0040338a  movabs r15, global.guess
0x00403394  mov    rdi, qword [r15]
0x00403397  mov    qword [rdi], rax

For comparison, an equivalent snippet in C compiled by clang without optimizations gives this output:

imul rax, qword ptr [guess], 10
add  rax, qword ptr [digit]
mov  qword ptr [guess], rax
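The contrast above can be caricatured with a toy code generator. The following Python sketch is purely illustrative pseudo-assembly (the mnemonics and slot names are invented, and this is not the real Agent protocol); it just counts how many instructions a spill-everything "catch-all" policy costs versus keeping intermediates in registers, for guess = guess * 10 + digit.

```python
def naive_codegen():
    """Catch-all policy: every intermediate round-trips through memory."""
    return [
        "load  r0, [guess]",
        "store [t0], r0",     # spill guess to a temp slot
        "load  r0, [t0]",
        "mul   r0, 10",
        "store [t1], r0",     # spill guess*10
        "load  r0, [t1]",
        "load  r1, [digit]",
        "add   r0, r1",
        "store [t2], r0",     # spill the sum
        "load  r0, [t2]",
        "store [guess], r0",
    ]

def register_codegen():
    """Register policy: intermediates stay in registers."""
    return [
        "load  r0, [guess]",
        "mul   r0, 10",
        "load  r1, [digit]",
        "add   r0, r1",
        "store [guess], r0",
    ]

print(len(naive_codegen()), "vs", len(register_codegen()))  # 11 vs 5
```

The point of the sketch: both sequences compute the same thing, but the spill-everything policy pays a memory round-trip per intermediate, which is why supporting richer variable policies shrinks and speeds up every emitted binary.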
Collaborations at the byte layer of Agents result in designs that spill every intermediate variable. First, why is this so? Agents from this early version only support the one catch-all variable design policy when collaborating. This is similar to a compiler when all registers contain variables: the compiler must decide to spill a register temporarily to main memory. The compiler would still work if it spilled every variable to main memory, but it would produce code that would be, as above, hopelessly inefficient. However, by only supporting the catch-all portion of the protocol, the Code Valley designers were able to design, build and deploy these Agents faster, because an Agent needs fewer predicates in order to participate in these simpler collaborations. The protocol involved, however, can have many "policies" besides the catch-all default (Agents can collaborate over variables designed to live on the stack, or, as is common for intermediate variables, designed to use a CPU register, and so forth). This example highlights one of the very exciting aspects of emergent coding: if we now add a handful of additional predicates to a handful of these byte-layer Agents, henceforth ALL project binaries will be 10x smaller and 10x faster. Finally, there can be many Agents competing for market share at each classification. If these "gumby" Agents do not improve, you can create a "smarter" competitor (i.e. with more predicates) and win business away from them. Candy from a baby. Competition means the smartest Agents bubble to the top of every classification and puts the entire emergent coding platform on a fast path for improvement. Contrast this with incumbent libraries, which do not have a financial incentive to improve. Just wait until you get to see our production system. 23. How hard can an ADD Agent be? Typically an Agent's feature is created by combining smaller features from other Agents.
The smallest features are so devoid of context and complexity they can be rendered by designing a handful of bytes in the project binary. Below is a description of one of these "byte" layer Agents to give you an idea how they work. An "Addition" Agent creates the feature of "adding two numbers" in your project (This is an actual Agent). That is, it contributes to the project design a feature such that when the project binary is delivered, there will be an addition instruction somewhere in it that was designed by the contract that was let to this Agent. If you were this Agent, for each contract you received, you would need to collaborate with peers in the project to resolve vital requirements before you can proceed to design your binary "instruction". Each paid contract your Agent receives will need to participate in at least 4 collaborations within the design project. These are:
Input A collaboration
Input B collaboration
Output collaboration
Construction site collaboration
You can see from the collaborations involved how your Agent can determine the precise details needed to design its instruction. As part of the contract, the Addition Agent will be provisioned with contact details so it can join these collaborations. Your Agent must collaborate with the other stakeholders in each collaboration to resolve that requirement - in this case, how a variable will be treated. The stakeholders use a protocol to arrive at an agreement and share the terms of that agreement. For example, the stakeholders of collaboration "Input A" may agree to treat the variable as a signed 64-bit integer, resolve to locate it at location 0x4fff2, or alternatively agree that the RBX register should be used, or agree to use one of the many other ways a variable can be represented. Once each collaboration has reached an agreement and the terms of that agreement have been distributed, your Agent can begin to design the binary instruction. The construction site collaboration is where you will place your binary bytes exactly. The construction site protocol is detailed in the whitepaper and is some of the magic that allows the decentralized development system to deliver the project binary. The protocol consists of 3 steps:
You request space in the project binary be reserved.
You are notified of the physical address of your requested space.
You deliver the binary bytes you designed to fill the reserved space.
Once the bytes are returned your Agent can remove the job from its work schedule. Job done, payment received, another happy customer with a shiny ADD instruction designed into their project binary. Note:
Observe how it is impossible for this ADD Agent to install a backdoor undetected by the client.
Observe how the Agent isn’t linking a module, or using a HLL to express the binary instruction.
Observe how with just a handful of predicates you have a working "Addition" Agent capable of designing the Addition Feature into a project with a wide range of collaboration agreements.
Observe how this Agent could conceivably not even design-in an ADD instruction if one of the design time collaboration agreements was a literal "1" (It would design in an increment instruction). There is even a case where this Agent may not deliver any binary to build its feature into your project!
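The three-step construction-site exchange described above can be sketched in miniature. The class and method names below (ConstructionSite, reserve, deliver) are invented stand-ins for the proprietary protocol; the sketch only shows the shape of the exchange: reserve space, learn its address, deliver the designed bytes, and let the binary emerge from the fragments.

```python
class ConstructionSite:
    """Hypothetical model of the whitepaper's construction-site protocol."""

    def __init__(self, base=0x400000):
        self.base = base
        self.reservations = []  # (offset, size, agent)
        self.fragments = {}     # offset -> delivered bytes

    def reserve(self, agent, size):
        # Steps 1 and 2: an agent requests space; the site replies with
        # the physical address of the reserved slot.
        offset = sum(s for _, s, _ in self.reservations)
        self.reservations.append((offset, size, agent))
        return self.base + offset

    def deliver(self, address, data):
        # Step 3: the agent delivers the bytes it designed for its slot.
        self.fragments[address - self.base] = data

    def binary(self):
        # Concatenating fragments in address order yields the project binary.
        return b"".join(self.fragments[o] for o in sorted(self.fragments))

site = ConstructionSite()
addr = site.reserve("addition-agent", 3)
site.deliver(addr, bytes([0x48, 0x01, 0xd8]))  # x86-64 encoding of: add rax, rbx
print(site.binary().hex())  # → 4801d8
```

Because each agent only ever sees the address and size of its own reservation, this also illustrates why an agent cannot write outside its slot.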
24. How does EC arrive at a project binary without writing source code? Devs using EC combine features to create solutions; they don't write code. EC devs contract Agents which design the desired features into their project for a fee. Emergent coding uses a domain-specific contracting language (called Pilot) to describe the necessary contracts. Pilot is not a general-purpose language. As Agents create their features by similarly combining smaller features contracted from peers, your desired features may ultimately result in thousands of contracts. As it is Agents all the way down, there is no source code used to create the project binary. Traditional: software requirements -> write code -> compile -> project binary (ELF). Emergent coding: select desired features -> contract agents -> project binary (ELF). Agents themselves are created the same way - specify the features you want your Agent to have, contract the necessary Agents for those features and voilà - an Agent project binary (ELF). 25. How is the actual binary code that Agents deliver to each other written? An Agent never touches code. With emergent coding, Agents contribute features to a project, and leave the project binary to emerge as the higher-order complexity of their collective effort. Typically, Agents "contribute" their feature by causing smaller features to be contributed by peers, who in turn do likewise. By mapping features to smaller features delivered by these peers, Agents ensure their feature is delivered to the project without themselves making a direct code contribution. Peer connections established by these mappings serve both to incrementally extend a temporary project "scaffold" and to defer the need to render a feature as a code contribution.
At the periphery of the scaffold, features are so simple they can be rendered as binary fragments, and these fragments use the information embodied by the scaffold to guide their concatenation back along the scaffold, emerging as the project binary - hence the term Emergent Coding. Note that the scaffold forms a temporary tree-like structure which allows virtually all the project design contracts to be completed in parallel. The scaffold also automatically limits an agent's scope to precisely the resources and site for its feature. This is why it is virtually impossible for an agent to install a "back door" or other malicious code into the project binary.
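The scaffold idea above can be modeled as a toy recursion. The agent names and byte fragments here are invented for illustration only: leaf agents render a few bytes, and every other agent's "feature" is nothing more than the in-order concatenation of the fragments its sub-contracts return.

```python
# Hypothetical contract tree: which sub-features each agent lets contracts for.
AGENTS = {
    "point-of-sale": ["show-price", "take-payment"],
    "show-price":    ["fmt-int", "print-str"],
    "take-payment":  ["sign-tx", "broadcast-tx"],
}

# Leaf agents are simple enough to render their feature as raw bytes.
LEAF_BYTES = {
    "fmt-int":      b"\x01",
    "print-str":    b"\x02",
    "sign-tx":      b"\x03",
    "broadcast-tx": b"\x04",
}

def contract(feature):
    """Resolve one contract: leaves deliver bytes; all other agents
    delegate to peers and concatenate the returned fragments."""
    if feature in LEAF_BYTES:
        return LEAF_BYTES[feature]
    return b"".join(contract(sub) for sub in AGENTS[feature])

# The project binary "emerges" from the tree of contracts.
print(contract("point-of-sale").hex())  # → 01020304
```

Note how no node in the tree ever sees the whole binary, and how the independent sub-trees could be contracted out in parallel, which is the property the answer above attributes to the scaffold.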