
Transcript of the community Q&A with Steve Shadders and Daniel Connolly of the Bitcoin SV development team. We talk about the path to big blocks, new opcodes, selfish mining, malleability, and why November will lead to a divergence in consensus rules. (Cont in comments)

We've gone through the painstaking process of transcribing the linked interview with Steve Shadders and Daniel Connolly of the Bitcoin SV team. There is an amazing amount of information in this interview that we feel is important for businesses and miners to hear, so we believe it was important to get this in written form. To avoid any bias, the transcript is taken almost word for word from the video, with just a few changes made for easier reading. If you see any corrections that need to be made, please let us know.
Each question is in bold, and each question and response is timestamped accordingly. You can follow along with the video here:
https://youtu.be/tPImTXFb_U8

BEGIN TRANSCRIPT:

Connor: 0:02:19.68,0:02:45.10
Alright, so thank you Daniel and Steve for joining us. We're joined by Steve Shadders and Daniel Connolly from nChain, who are also the lead developers of the Satoshi's Vision client. So Daniel and Steve, do you guys just want to introduce yourselves before we kind of get started here - who are you guys and how did you get started?
Steve: 0:02:38.83,0:03:30.61
So I'm Steve Shadders and at nChain I am the director of solutions in engineering and specifically for Bitcoin SV I am the technical director of the project which means that I'm a bit less hands-on than Daniel but I handle a lot of the liaison with the miners - that's the conditional project.
Daniel:
Hi I’m Daniel I’m the lead developer for Bitcoin SV. As the team's grown that means that I do less actual coding myself but more organizing the team and organizing what we’re working on.
Connor: 0:03:23.07,0:04:15.98
Great, so we took some questions - we asked on Reddit to have people come and post their questions. We tried to take as many of those as we could and eliminate some of the duplicates, so we're gonna kind of go through each question one by one. We added some questions of our own in and we'll try and get through most of these if we can. So I think we just wanted to start out and ask, you know, Bitcoin Cash is a little bit over a year old now. Bitcoin itself is ten years old, but in the past little over a year now, what has the process been like for you guys working with the multiple development teams and, you know, why is it important that the Satoshi's Vision client exists today?
Steve: 0:04:17.66,0:06:03.46
I mean, yes, well we've been in touch with the developer teams for quite some time - I think a bi-weekly meeting of Bitcoin Cash developers across all implementations started around November last year. I myself joined those in January or February of this year and Daniel a few months later. So we communicate with all of those teams and I think, you know, it's not been without its challenges. It's well known that there's a lot of disagreements around it, but what I do look forward to in the near future is a day when the consensus issues themselves are all rather settled, and if we get to that point then there's not going to be much reason for the different developer teams to disagree on stuff. They might disagree on non-consensus related stuff but that's not the end of the world because, you know, Bitcoin Unlimited is free to go and implement whatever they want in the back end of Bitcoin Unlimited and Bitcoin SV is free to do whatever they want in the back end, and if they interoperate on a non-consensus level, great. If they don't, it's not such a big problem - there will obviously be bridges between the two. So, yeah, I think going forward the complications of having so many personalities with wildly different ideas are going to get less and less.
Cory: 0:06:00.59,0:06:19.59
I guess moving forward now another question about the testnet - a lot of people on Reddit have been asking what the testing process for Bitcoin SV has been like, and if you guys plan on releasing any of those results from the testing?
Daniel: 0:06:19.59,0:07:55.55
Sure, yeah - so our release was concentrated on stability, right, with the first release of Bitcoin SV, and that involved doing a large amount of additional testing, particularly not so much at the unit test level but at the more system test level - so setting up test networks, performing tests, and making sure that the software behaved as we expected, right. Confirming the changes we made, making sure that there aren't any other side effects. Because, you know, it was quite a rush to release the first version, we've got our test results documented, but not in a way that we can really release them. We're thinking about doing that but we're not there yet.
Steve: 0:07:50.25,0:09:50.87
Just to tidy that up - we've spent a lot of our time developing really robust test processes and the reporting is something that we can read on our internal systems easily, but we need to tidy that up to give it out for public release. The priority for us was making sure that the software was safe to use. We've established a test framework that involves a progression of code changes through multiple test environments - I think it's five different test environments before it gets the QA stamp of approval - and as for the question about the testnet, yeah, we've got four of them. We've got Testnet One and Testnet Two. A slightly different numbering scheme to the Testnet Three that everyone's probably used to – that's just how we reference them internally. They're [1 and 2] both forks of Testnet Three. [Testnet] One we used for activation testing, so we would test things before and after activation - that one's set to reset every couple of days. The other one [Testnet Two] was set to post activation so that we can test all of the consensus changes. The third one was a performance test network, which I think most people have probably heard us refer to before as the Gigablock Testnet. I get my tongue tied every time I try to say that word so I've started calling it the Performance test network. And I think we're planning on having two of those: one that we can just do our own stuff with and experiment on without having to worry about external unknown factors going on - having other people joining it and doing stuff that we don't know about that affects our ability to baseline performance tests - but the other one (which I think might still be a work in progress, so Daniel might be able to answer that one) is one where basically everyone will be able to join and they can try and mess stuff up as bad as they want.
Daniel: 0:09:45.02,0:10:20.93
Yeah, so we recently shared the details of Testnet One and Two with the other BCH developer groups. The Gigablock test network we've shared with one group so far, but yeah, we're building it, as Steve pointed out, to be publicly accessible.
Connor: 0:10:18.88,0:10:44.00
I think that was my next question. I saw that you posted on Twitter about the revived Gigablock testnet initiative, and it looked like blocks bigger than 32 megabytes were being mined and propagated there, but maybe the block explorers themselves were going down - what does that revived Gigablock test initiative look like?
Daniel: 0:10:41.62,0:11:58.34
That's what the Gigablock test network is. The Gigablock test network was first set up by Bitcoin Unlimited with nChain's help and they did some great work on that, and we wanted to revive it. So we wanted to bring it back and do some large-scale testing on it. It's a flexible network - at one point we had eight different large nodes spread across the globe, sort of mirroring the old one. Right now we've scaled back because we're not using it at the moment, so there are, I think, three. We have produced some large blocks there and it's helped us a lot in our research into the scaling capabilities of Bitcoin SV, so it's guided the work that the team's been doing for the last month or two on the improvements that we need for scalability.
Steve: 0:11:56.48,0:13:34.25
I think that's actually a good point to kind of frame where our priorities have been, in kind of two separate stages. I think, as Daniel mentioned before, because of the time constraints we kept the change set for the October 15 release as minimal as possible - it was just the consensus changes. We didn't do any work on performance at all and we put all our focus and energy into establishing the QA process and making sure that that change was safe, and that was a good process for us to go through. It highlighted what we were missing in our team – we got our recruiters very busy recruiting a Test Manager and more QA people. The second stage after that is performance related work which, as Daniel mentioned, the results of our performance testing fed into - what tasks we were gonna start working on for the performance related stuff. Now that work is still in progress - for some of the items that we identified the code is done and that's going through the QA process, but it's not quite there yet. That's basically the two-stage process that we've been through so far. We have a roadmap that goes further into the future that outlines more stuff, but primarily it's been QA first, performance second. The performance enhancements are close and on the horizon, but some of that work should be ongoing for quite some time.
Daniel: 0:13:37.49,0:14:35.14
Some of the changes we need for the performance are really quite large and really get down into the base level of the software. There are kind of two groups of them, mainly. Ones that are internal to the software – to Bitcoin SV itself - improving the way it works inside. And then there are other ones that interface it with the outside world. For one of those in particular we're working closely with another group to make a compatible change - it's not consensus changing or anything like that - but having the same interface on multiple different implementations will be very helpful, right, so we're working closely with them to make improvements for scalability.
Connor: 0:14:32.60,0:15:26.45
Obviously for Bitcoin SV one of the main things that you guys wanted to do, that some of the other developer groups weren't willing to do right now, is to increase the maximum default block size to 128 megabytes. I kind of wanted to pick your brains a little bit about that - a lot of the objection to either removing the block size entirely or increasing it on a larger scale is this idea of, like, the infinite block attack, right, and that kind of came through in a lot of the questions. What are your thoughts on the "infinite block attack"? Is it something that really exists, is it something that miners themselves should be more proactive in preventing, or, I guess, what are your thoughts on that attack that everyone says will happen if you uncap the block size?
Steve: 0:15:23.45,0:18:28.56
I'm often quoted on Twitter and Reddit - I've said before the infinite block attack is bullshit. Now, that's a statement that I suppose is easy to take out of context, but I think the 128 MB limit is something where there are probably two schools of thought. There are some people who think that you shouldn't increase the limit to 128 MB until the software can handle it, and there are others who think that it's fine to do it now so that the limit is already increased when the software can handle it, and you don't run into the limit when the software improves. Obviously we're from the latter school of thought. As I said before, we've got a bunch of performance increases, performance enhancements, in the pipeline. If we wait till May to increase the block size limit to 128 MB then those performance enhancements will go in, but we won't be able to actually demonstrate it on mainnet. As for the infinite block attack itself, there are a number of mitigations that you can put in place. Firstly, you know, going down to a bit of the tech detail - when you send a block message or send any peer to peer message there's a header which has the size of the message. If someone says they're sending you a 30MB message and you're receiving it and it gets to 33MB, then obviously you know something's wrong, so you can drop the connection. If someone sends you a message that's 129 MB and you know the block size limit is 128, you know it's kind of pointless to download that message. So I mean these are just some of the mitigations that you can put in place. When I say the attack is bullshit, I mean it is bullshit in the sense that it's really quite trivial to prevent it from happening. I think there is a bit of a school of thought in the Bitcoin world that if it's not in the software right now then it kind of doesn't exist. I disagree with that, because there are small changes that can be made to work around problems like this. One other aspect of the infinite block attack - and let's not call it the infinite block attack, let's just call it the large block attack - is that it takes a lot of time to validate. We've gotten around that by having parallel pipelines for blocks to come in, so you've got a block that's coming in and it's stuck for two hours or whatever downloading and validating. At some point another block is going to get mined by someone else, and as long as those two blocks aren't stuck in a serial pipeline then, you know, the problem kind of goes away.
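[Transcriber's illustration] To make the mitigation Steve describes concrete, here is a minimal Python sketch - not Bitcoin SV's actual networking code, and with illustrative constants and framing - of the two checks: refuse to download a message that claims to be larger than the block size limit, and drop a peer whose payload outgrows the size declared in the message header.

    # Toy framed-message receiver; illustration only, not Bitcoin SV code.
    MAX_BLOCK_SIZE = 128 * 1000 * 1000  # assumed 128 MB limit for the example

    def receive_message(sock, declared_size, chunk_size=64 * 1024):
        """Read a message whose header declared `declared_size` bytes, aborting on abuse."""
        if declared_size > MAX_BLOCK_SIZE:
            # e.g. a "129 MB" block when the limit is 128 MB: pointless to download
            raise ValueError("declared size exceeds block size limit; dropping peer")
        received = bytearray()
        while len(received) < declared_size:
            chunk = sock.recv(chunk_size)
            if not chunk:
                raise ConnectionError("peer disconnected mid-message")
            received.extend(chunk)
            if len(received) > declared_size:
                # e.g. a message announced as 30 MB that has already reached 33 MB
                raise ValueError("payload exceeds declared size; dropping peer")
        return bytes(received)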
Cory: 0:18:26.55,0:18:48.27
Are there any concerns with the propagation of those larger blocks? Because there are a lot of questions around, you know, what practical size Bitcoin SV could scale to right now, and the concerns around propagating those blocks across the whole network.
Steve: 0:18:45.84,0:21:37.73
Yes, there have been concerns raised about it. I think what people forget is that compact blocks and xThin exist, so a 32MB block does not mean sending 32MB of data in most cases - almost all cases. The concern here that I do find legitimate is the Great Firewall of China. Very early on in Bitcoin SV we started talking with miners on the other side of the firewall and that was one of their primary concerns. We had anecdotal reports of people who were having trouble getting a stable connection any faster than 200 kilobits per second, and even with compact blocks you still need to get the transactions across the firewall. So we've done a lot of research into that - we tested our own links across the firewall, or rather CoinGeek's links across the firewall, as they've given us access to some of their servers so that we can play around, and we were able to get sustained rates of 50 to 90 megabits per second, which pushes that problem quite a long way down the road into the future. I don't know the maths off the top of my head, but the size of the blocks that that rate can sustain is pretty large. So we're looking at a couple of options - it may well be that the chattiness of the peer-to-peer protocol causes some of these issues with the Great Firewall, so we have someone building a bridge concept/tool where you basically just have one kind of TX vacuum on either side of the firewall that collects them all up and sends them off every one or two seconds as a single big chunk to eliminate some of that chattiness. The other is we're looking at building a multiplexer that will sit and send stuff up to the peer-to-peer network on one side and send it over splitters - send it over multiple links - and reassemble it on the other side, so we can sort of transit the Great Firewall without too much trouble. But getting back to the core of your question - yes, there is a theoretical limit to block size from propagation time, and that's kind of where Moore's Law comes in. Put in faster links and you kick that can further down the road, and you just keep on putting in faster links. I don't think 128 meg blocks are going to be an issue though, with the speed of the internet that we have nowadays.
Connor: 0:21:34.99,0:22:17.84
One of the other changes that you guys are introducing is increasing the max script size, so I think right now it's going from 201 to 500 [opcodes]. So I guess a few of the questions we got were, I guess, #1: why not uncap it entirely - I think you guys said you ran into some concerns while testing that - and then #2, also specifically, we had a question about how certain you are that there are no remaining n squared bugs or vulnerabilities left in script execution?
Steve: 0:22:15.50,0:25:36.79
It's an interesting decision - we were initially planning on removing that cap altogether, so the next cap that comes into play after that (the next effective cap) is a 10,000 byte limit on the size of the script. We took a more conservative route and decided to wind that back to 500 - it's interesting that we got some criticism for that, when the primary criticism I think that was leveled against us was that it's dangerous to increase that limit to unlimited. We did that because we're being conservative. We did some research into these n squared bugs, sorry – attacks, that people have referred to. We identified a few of them and we had a hard think about it and thought - look, if we can find this many in a short time we can fix them all (the whack-a-mole approach), but it does suggest that there may well be more unknown ones. So we thought about taking the whack-a-mole approach, but that doesn't really give us any certainty. We will fix all of those individually, but a more global approach is to make sure that if anyone does discover one of these scripts it doesn't bring the node to a screaming halt. The problem here is that because the Bitcoin node is essentially single-threaded, if you get one of these scripts that locks up the script engine for a long time, everything that's behind it in the queue has to stop and wait. So what we wanted to do, and this is something we've got an engineer actively working on right now, is once that script validation code path is properly parallelized (parts of it already are), we'll basically assign a few threads for well-known transaction templates, and a few threads for any type of script. So if you get a few scripts that are nasty and lock up a thread for a while, that's not going to stop the node from working, because you've got these other kind of lanes of the highway that are exclusively reserved for well-known script templates and they'll just keep on passing through. Once you've got that in place, I think we're in a much better position to get rid of that limit entirely, because the worst that's going to happen is your non-standard script pipelines get clogged up but everything else will keep ticking along. There are other mitigations for this as well - I mean, miners could always put a time limit on script execution if they wanted to, and that would be something that would be up to individual miners. Bitcoin SV's job, I think, is to provide the tools for the miners, and the miners can then choose, you know, how to make use of them - if they want to set time limits on script execution then that's a choice for them.
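[Transcriber's illustration] As a rough sketch of the "lanes of the highway" idea - purely conceptual, not the Bitcoin SV implementation, with arbitrary thread counts - well-known script templates get their own dedicated validation workers, so a pathological non-standard script can only stall the general-purpose lane:

    # Conceptual sketch of per-template validation lanes.
    import queue
    import threading

    standard_lane = queue.Queue()   # well-known templates, e.g. pay-to-public-key-hash
    general_lane = queue.Queue()    # everything else, including potentially nasty scripts

    def worker(lane):
        while True:
            validate, tx = lane.get()
            validate(tx)            # a slow hostile script only ties up this lane's thread
            lane.task_done()

    for _ in range(3):              # several threads reserved for the well-known lane
        threading.Thread(target=worker, args=(standard_lane,), daemon=True).start()
    threading.Thread(target=worker, args=(general_lane,), daemon=True).start()

    def submit(tx, is_standard, validate):
        (standard_lane if is_standard else general_lane).put((validate, tx))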
Daniel: 0:25:34.82,0:26:15.85
Yeah, I'd like to point out that a node here, when it receives a transaction through the peer to peer network, doesn't have to accept that transaction - you can reject it. If it looks suspicious to the node it can just say, you know, we're not going to deal with that, or if it takes more than five minutes to execute, or more than a minute even, it can just abort and discard that transaction, right. The only time we can't do that is when it's in a block already, but then it could decide to reject the block as well. Those are all possibilities there could be in the software.
Steve: 0:26:13.08,0:26:20.64
Yeah, and if it's in a block already it means someone else was able to validate it so…
Cory: 0:26:21.21,0:26:43.60
There's a lot of discussion about the re-enabled opcodes coming – OP_MUL, OP_INVERT, OP_LSHIFT, and OP_RSHIFT. Can you maybe explain the significance of those opcodes being re-enabled?
Steve: 0:26:42.01,0:28:17.01
Well, I mean one of the most significant things is that, other than two, which are minor variants of DUP and MUL, they represent almost the complete set of original opcodes. I think that's not necessarily a technical issue, but it's an important milestone. MUL is one that I've heard some interesting comments about. People ask me why are you putting OP_MUL back in if you're planning on changing them to big number operations instead of the 32-bit limit that they're currently imposed upon. The simple answer to that question is that we currently have all of the other arithmetic operations except for OP_MUL. We've got add, divide, subtract, modulo – it's odd to have a script system that's got all the mathematical primitives except for multiplication. The other answer to that question is that they're useful - we've talked about a Rabin signature solution that basically replicates the function of DATASIGVERIFY. That's just one example of a use case for this - most cryptographic primitive operations require mathematical operations, and bit shifts are useful for a whole ton of things. So it's really just about completing that work and completing the script engine, or rather not completing it, but putting it back the way that it was meant to be.
Connor: 0:28:20.42,0:29:22.62
Big Num vs 32 Bit. I've seen - Daniel, I think I saw you answer this on Reddit a little while ago - that the new opcodes use logical shifts while Satoshi's version used arithmetic shifts. The general question that I think a lot of people keep bringing up, maybe in a rhetorical way, is: why not restore it back to the way Satoshi had it exactly - what are the benefits of changing it now to operate a little bit differently?
Daniel: 0:29:18.75,0:31:12.15
Yeah, there are two parts there - the big number one, and LSHIFT being a logical shift instead of arithmetic. So when we re-enabled these opcodes we looked at them carefully and adjusted them slightly, as we did in the past with OP_SPLIT. So the new LSHIFT and RSHIFT are bitwise operators. They can be used to implement arithmetic-based shifts - I think I've posted a short script that did that - but we can't do it the other way around, right. You couldn't use an arithmetic shift operator to implement a bitwise one. It's because of the ordering of the bytes in the arithmetic values, the values that represent numbers. They're little-endian, which means they're swapped around compared to what many other systems use - what I'd consider normal - big-endian. And if you start shifting that properly as a number, then the shifting sequence in the bytes is a bit strange, so it couldn't go the other way around - you couldn't implement bitwise shift with arithmetic - so we chose to make them bitwise operators. That's what we proposed.
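[Transcriber's illustration] The byte-order point is easier to see with a small example. This plain-Python sketch - not Bitcoin script, and ignoring script's sign handling and minimal-encoding rules - shows that a bitwise left shift over a byte string doubles a big-endian number but not a little-endian one, because the carry bit falls off the wrong end instead of moving into the next more-significant byte.

    # A bitwise shift treats its operand as an opaque bit string, first byte most significant.
    def bitwise_lshift(data: bytes, n: int) -> bytes:
        """Shift a byte string left by n bits, keeping its length."""
        width = len(data) * 8
        as_int = int.from_bytes(data, "big")
        return ((as_int << n) % (1 << width)).to_bytes(len(data), "big")

    x = 200
    big_endian = x.to_bytes(4, "big")        # 00 00 00 c8
    little_endian = x.to_bytes(4, "little")  # c8 00 00 00

    print(int.from_bytes(bitwise_lshift(big_endian, 1), "big"))        # 400 - doubling works
    print(int.from_bytes(bitwise_lshift(little_endian, 1), "little"))  # 144 - the carry is lost

This is the asymmetry Daniel describes: a bitwise shift can be composed into an arithmetic shift by handling the byte order explicitly in the script, but an arithmetic shift cannot be turned back into a general bitwise one.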
Steve: 0:31:10.57,0:31:51.51
That was essentially a decision that was actually made in May, or rather a consequence of decisions that were made in May. So in May we reintroduced OP_AND, OP_OR, and OP_XOR, and another decision, to replace three different string operators with OP_SPLIT, was also made. So that was not a decision that we made unilaterally - it was a decision that was made collectively with all of the BCH developers. Well, not all of them were actually in all of the meetings, but they were all invited.
Daniel: 0:31:48.24,0:32:23.13
Another example of that is that we originally proposed OP_2DIV and OP_2MUL, was it, I think - and this is a single operator that multiplies the value by two, right - but it was pointed out that that can very easily be achieved by just doing a multiply by two instead of having a separate operator for it, so we scrapped those, we took them back out, because we wanted to keep the number of operators to a minimum, yeah.
Steve: 0:32:17.59,0:33:47.20
There was an appetite around for keeping the operators minimal. I mean, the idea to replace OP_SUBSTR, OP_LEFT, and OP_RIGHT with the OP_SPLIT operator actually came from Gavin Andresen. He made a brief appearance in the Telegram workgroups while we were working out what to do with the May opcodes, and obviously Gavin's word kind of carries a lot of weight and we listen to him. But because we had chosen to implement the May opcodes (the bitwise opcodes) and treat the data as big-endian data streams (well, sorry, big-endian is not really applicable - just plain data strings), it would have been completely inconsistent to implement LSHIFT and RSHIFT as integer operators, because then you would have had a set of bitwise operators that operated on two different kinds of data, which would have just been nonsensical and very difficult for anyone to work with, so yeah. I mean it's a bit like P2SH - it wasn't a part of the original Satoshi protocol, but once some things are done they're done, and you know if you want to make forward progress you've got to work within that framework that exists.
Daniel: 0:33:45.85,0:34:48.97
When we get to the big number ones then it gets really complicated, you know, big number implementations, because then you can't change the behavior of the existing opcodes - and I don't mean OP_MUL, I mean the other ones that have been there for a while. You can't suddenly make them big number ones without seriously looking at what scripts there might be out there and the impact of that change on those existing scripts, right. The other point is you don't know what scripts are out there because of P2SH - there could be scripts that you don't know the content of, and you don't know what effect changing the behavior of these operators would have. The big number thing is tricky - I don't know what the options are, though; it needs some serious thought.
Steve: 0:34:43.27,0:35:24.23
That's something we've reached out to the other implementation teams about - we'd actually really like their input on the best ways to go about restoring big number operations. It has to be done extremely carefully, and I don't know if we'll get there by May next year, or when, but we're certainly willing to put a lot of resources into it and we're more than happy to work with BU or XT or whoever wants to work with us on getting that done and getting it done safely.
Connor: 0:35:19.30,0:35:57.49
Kind of along this similar vein, you know, Bitcoin Core introduced this concept of standard scripts, right - standard and non-standard scripts. I had a pretty interesting conversation with Clemens Ley about use cases for "non-standard scripts", as they're called. I know at least one developer on Bitcoin ABC is very hesitant, or kind of pushed back on him, about doing that, so what are your thoughts about non-standard scripts and the entirety of, like, an IsStandard check?
Steve: 0:35:58.31,0:37:35.73
I'd actually like to repurpose the concept. I think I mentioned before multi-threaded script validation and having some dedicated well-known script templates - when you say the words "well-known script template" there's already a check in Bitcoin that kind of tells you if it's well-known or not, and that's IsStandard. I'm generally in favor of getting rid of the notion of standard transactions, but it's actually a decision for miners, and it's really more of a behavioral change than it is a technical change. There's a whole bunch of configuration options that miners can set that affect what they consider to be standard and not standard, but the reality is not too many miners are using those configuration options. So I mean standard transactions as a concept is meaningful to an arbitrary degree, I suppose, but yeah, I would like to make it easier for people to get non-standard scripts into Bitcoin so that they can experiment, and from discussions I've had with CoinGeek they're quite keen on making their miners accept, you know, at least initially, a wider variety of transactions.
Daniel: 0:37:32.85,0:38:07.95
So I think IsStandard will remain important within the implementation itself for efficiency purposes, right - you want to streamline the base use case of cash payments and prioritize them. That's where it will remain important, but on the interfaces from the node to the rest of the network, yeah, I could easily see it being removed.
Cory: 0:38:06.24,0:38:35.46
Connor mentioned that there are some people that disagree with Bitcoin SV and what they're doing - a lot of questions around, you know, why November? Why implement these changes in November - they think that maybe with a six-month delay there might not be a split. Well, first off, what do you think about the idea of a potential split, and I guess what is the urgency for November?
Steve: 0:38:33.30,0:40:42.42
Well, in November there's going to be a divergence of consensus rules regardless of whether we implement these new opcodes or not. Bitcoin ABC released their spec for the November hard fork change, I think on August 16th or 17th, something like that, and their client as well, and it included CTOR and it included DSV. Now, for the miners that commissioned the SV project, CTOR and DSV are controversial changes, and once they're in they're in. They can't be reversed - I mean CTOR maybe you could reverse at a later date, but DSV - once someone's put a P2SH transaction, or even a non-P2SH transaction, into the blockchain using that opcode, it's irreversible. So it's interesting that some people refer to the Bitcoin SV project as causing a split - we're not proposing to do anything that anyone disagrees with. There might be some contention about changing the opcode limit, but what we're doing - I mean Bitcoin ABC already published their spec for May and it is our spec for the new opcodes. So in terms of urgency - should we wait? Well, the fact is that we can't - come November, you know, it's a bit like Segwit - once Segwit was in, yes, you arguably could get it out by spending everyone's anyone-can-spend transactions, but in reality it's never going to be that easy and it's going to cause a lot of economic disruption. So yeah, that's it. We're putting our changes in because it's not gonna make a difference either way in terms of whether there's going to be a divergence of consensus rules - there's going to be a divergence whatever our changes are. Our changes are not controversial at all.
Daniel: 0:40:39.79,0:41:03.08
If we didn't include these changes in the November upgrade we'd be pushing ahead with a no-change release, right, but the November upgrade is there, so we should use it while we can, adding these non-controversial changes to it.
Connor: 0:41:01.55,0:41:35.61
Can you talk about DATASIGVERIFY? What are your concerns with it? The general concept that's been kind of floated around, because of Ryan Charles, is the idea that it's a subsidy, right - that it takes a whole megabyte and kind of crunches that down, and the computation time stays the same but maybe the cost is less - do you kind of share his view on that, or what are your concerns with it?
Daniel: 0:41:34.01,0:43:38.41
Can I say one or two things about this – there are different ways to look at that, right. I'm an engineer - my specialization is software, so on the economics of it I hear different opinions. I trust some more than others, but I am NOT an economist. With my limited expertise I kind of agree with the ones saying it's a subsidy - it looks very much like it to me - but yeah, that's not my area. What I can talk about is the software - so adding DSV adds really quite a lot of complexity to the code, right, and it's a big change to add that. And what are we going to do - every time someone comes up with an idea we're going to add a new opcode? How many opcodes are we going to add? I saw reports that Jihan was talking about hundreds of opcodes or something like that, and it's like, how big is this client going to become - how big is this node - is it going to have to handle every kind of weird opcode that's out there? The software is just going to get unmanageable. And DSV - my main consideration at the beginning was, you know, if you can implement it in script you should do it, because that way it keeps the node software simple, it keeps it stable, and you know it's easier to test that it works properly and correctly. It's almost like adding (?) code to a microprocessor - you know, why would you do that if you can implement it already in the script that is there.
Steve: 0:43:36.16,0:46:09.71
It's actually an interesting inconsistency, because when we were talking about adding the opcodes in May, the philosophy that seemed to drive the decisions that we were able to form a consensus around was to simplify and keep the opcodes as minimal as possible (i.e. where you could replicate a function by using a couple of primitive opcodes in combination, that was preferable to adding a new opcode that replaced them). OP_SUBSTR is an interesting example - it's a combination of SPLIT, SWAP and DROP opcodes to achieve it. So at the really primitive script level we've got this philosophy of let's keep it minimal, and at this other (?) level the philosophy is let's just add a new opcode for every primitive function, and Daniel's right - it's a question of opening the floodgates. Where does it end? If we're just going to go down this road, it almost opens up the argument: why have a scripting language at all? Why not just hard code all of these functions in, one at a time? You know, pay to public key hash is a well-known construct (?) and not bother executing a script at all - but once we've done that we take away all of the flexibility for people to innovate. So it's a philosophical difference, I think, but I think it's one where the position of keeping it simple does make sense. All of the primitives are there to do what people need to do. The things that people feel like they can't do are because of the limits that exist. If we had no opcode limit at all, if you could make a gigabyte transaction, so a gigabyte script, then you could do any kind of crypto that you wanted even with 32-bit integer operations. Once you get rid of the 32-bit limit, of course, a lot of those scripts come out a lot smaller, so a Rabin signature script shrinks from 100MB to a couple of hundred bytes.
Daniel: 0:46:06.77,0:47:36.65
I lost a good six months of my life diving into script, right. Once you start getting into the language and what it can do, it is really pretty impressive how much you can achieve within script. Bitcoin was designed, was released originally, with script. I mean it didn't have to be – instead of having a transaction with script you could have accounts and you could say, you know, transfer so many BTC from this public key to this one - but that's not the way it was done. It was done using script, and script provides so many capabilities if you start exploring it properly. If you start really digging into what it can do, yeah, it's really amazing what you can do with script. I'm really looking forward to seeing some very interesting applications come from that. I mean, Awemany's zero-conf script was really interesting, right. It relies on DSV, which is a problem (and there are some other things that I don't like about it), but him diving in and using script to solve this problem was really cool - it was really good to see that.
Steve: 0:47:32.78,0:48:16.44
I asked a couple of people in our research team who have been working on the Rabin signature stuff a question this morning, actually, as I wasn't sure where they were up to with it. They're actually working on a proof of concept (which I believe is pretty close to done) which is a Rabin signature script - it will use smaller signatures so that it can fit within the current limits, but it will be, you know, effectively the same algorithm (as DSV). So I can't give you an exact date on when that will happen, but it looks like we'll have a Rabin signature in the blockchain soon (a mini-Rabin signature).
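[Transcriber's illustration] For readers wondering what a Rabin signature check involves, here is a toy Python sketch of generic textbook Rabin verification - not nChain's actual construction, and with the padding scheme simplified. The point is that verification needs only one big multiplication and a modular reduction (signing is what requires the private factorization of n), which is exactly the kind of arithmetic that OP_MUL plus big-number support would let a script express.

    # Toy textbook Rabin verification - simplified padding, illustrative only.
    import hashlib

    def hash_to_int(message: bytes, pad: bytes, n: int) -> int:
        return int.from_bytes(hashlib.sha256(message + pad).digest(), "big") % n

    def rabin_verify(n: int, message: bytes, pad: bytes, signature: int) -> bool:
        # Valid iff signature^2 == H(message || pad) (mod n); n = p*q is the public key.
        return (signature * signature) % n == hash_to_int(message, pad, n)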
Cory: 0:48:13.61,0:48:57.63
Based on your responses I think I kind of already know the answer to this question, but there are a lot of questions about ending experimentation on Bitcoin. I was gonna kind of turn that into: with the plan that Bitcoin SV is on, do you guys see, like, a potential one final release - you know, that there are gonna be no new opcodes ever released (like maybe five years down the road we just solidify the base protocol and move forward with that) - or are you guys more of the idea of being open-ended, that we can introduce new opcodes with appropriate testing?
Steve: 0:48:55.80,0:49:47.43
I think you've got to factor in what I said before about the philosophical differences. I think new functionality can be introduced just fine. Having said that - yes, there is a place for new opcodes, but it's probably a limited place. In my opinion the cryptographic primitive functions, for example - CHECKSIG uses ECDSA with a specific elliptic curve, HASH256 uses SHA256 - at some point in the future those are going to no longer be as secure as we would like them to be and we'll replace them with different hash functions, verification functions, at some point, but I think that's a long way down the track.
Daniel: 0:49:42.47,0:50:30.3
I'd like to see more data too. I'd like to see evidence that these things are needed, and the way I could imagine that happening is that, you know, that with the full scripting language some solution is implemented and we discover that this is really useful, and over a period of, like, you know measured in years not days, we find a lot of transactions are using this feature, then maybe, you know, maybe we should look at introducing an opcode to optimize it, but optimizing before we even know if it's going to be useful, yeah, that's the wrong approach.
Steve: 0:50:28.19,0:51:45.29
I think that optimization is actually going to become an economic decision for the miners. From the miner's point of view, does it make more sense for them to be able to optimize a particular process - does it reduce costs for them such that they can offer a better service to everyone else? Yeah, so ultimately these decisions are going to be mainly miners' decisions, not developer decisions. Developers of course can offer their input - I wouldn't expect every miner to be an expert on script, but as we're already seeing, miners are actually starting to employ their own developers. I'm not just talking about us - there are other miners in China that I know have got some really bright people on their staff that question and challenge all of the changes - study them and produce their own reports. We've been lucky with actually being able to talk to some of those people and have some really fascinating technical discussions with them.
submitted by The_BCH_Boys to btc

Full tutorial for setting up a hidden service store

Hello everybody! There are a lot of vendors which reputation is very high and may be trusted for direct orders. If they do not want to rely only on third parties markets and be dependant to their downtime, death, exit scam etc. with this tutorial they will be able to easily setup a private store (and harden it a bit).
Advantages:
Disadvantages:
This tutorial will guide you with the entire procedure, from buying a server to setting up Anonymart. This tutorial assumes that you will start with a freshly installed Debian 7. Other setup and software may interfere with my setup script, so if you are unsure read the source code.

Buying the server

This is probably the hardest part. You should look for a provider who accept Bitcoin and that has not strict practices on verifying customers identities.
One of the best resources for finding out such providers is:
https://www.exoticvps.com/
While there are some providers like vultr.com which will not ask for personal details and will not complain about tor, I'd suggest to avoid such large scale companies (especially if based in the US). For example, if we assume the scenario where everybody choose Vultr because it's the easier place to obtain a server, LE may force Vultr to monitor all instances which generate tor traffic without being a a tor node. After that they may cause some seconds of downtime each and compare the result to the availability of the store. The whole point of this tutorial is to decentralize, and you really should think always about that.
On most providers you can't order via Tor with obviously fake credentials because all of them use MaxMind fraud prevention which will blacklist all orders done via Tor, VPN or anonymous proxies.
First of all install proxychains on your torified system. You can install it in Tails and debian based distributions with
sudo apt-get install proxychains
(on Whonix this step is not required)
Now, in order to place an order which seems legit to fraud prevention we need a clean ip from a residential connection. This is what Socks Proxies exist for so you should buy some at Vip72 (or obviously any other provider). The demo cost 3$ and you can pay with Bitcoin via Tor.
After your payment has been verified you should be able to browse Socks Proxies by their Country/Region.
Select one and test it via proxychains. Proxychains is useful because, as the name says, it can chain proxy, so you can connect to the specified set of proxy you want via tor.
Here's the default configuration:
[ProxyList] # add proxy here ... # meanwile # defaults set to "tor" socks4 127.0.0.1 9050 
Now add the selected proxy to the conf:
sudo nano /etc/proxychains.conf
[ProxyList] # add proxy here ... # meanwile # defaults set to "tor" socks4 127.0.0.1 9050 socks5   
Now open a browser using proxychains:
proxychains chromium
or
proxychains firefox
Keep in mind that this should not be done with tor-browser because it's iser agents and other specifics are detected by the anti fraud system.
If the socks proxy is working you should be able to browse the internet. If nothing loads, just get another socks and change the proxychains configuration.
Now go to http://www.fakenamegenerator.com/ and get something which will match your proxy and seems to be believable.
Choose your provider and try to order depending on which location you prefer and how much money you wish to spend. Keep in mind that this tutorial is aimed to full system, so if you are not ordering a dedicated server but a VPS you should select a full virtualized one (KVM, vmware, XEN-HVM). Unless you're expecting a huge load, 512MB of RAM and 10GB oh storage should be enough.
Your provider will send you an email with information to access to you control panel from where you will be able to install the operating system. This tutorial is specifically for Debian 7 x64 (x86 is ok too), but if you know what you are doing you can obviously

Basic server setup

First of all you have to generate a ssh key if you already don't have one.
ssh-keygen -t ecdsa
With that command we are generating a 256 bits ECDSA key.
If you left the dafult options you should be able to get the public key using:
cat .ssh/id_ecdsa.pub
Now login to your newly installed server. The panel should have generally asked you to provide a root password or sent via email a random generated one. Since here we're assuming that you are on Tails, Whonix or any othe system which force all connections trough tor. In particular, if you are on Tails, you should enable SSH keys persistence. If you continue on the tutorial skipping this part, you will loose your keys and the access to the server as soon as you shutdown your computer.
ssh [email protected]
Answer yes to the first question.
Now the last step:
git clone https://github.com/anonymart/anonymart.git /vawww/anonymart
sh /vawww/anonymart/bin/full_setup.sh
The installation script will update the system, remove useless packages, install the required ones, configure a nginx+php-fpm+mysql stack, configure tor, configure iptables and much more. If everything goes smoothly at the end it should tell you an onion address which will be the the url of your store and an onion address which you will use to connect via ssh to the server instead of the original ip.

Configure anonymart

Now go to your new url. You will be redirected to /settings/create where you will create the basic settings for yout store. Choose a very strong password. Bitcoin address for payments will be generated using your Electrum master key (which can't be used to spend the coins) using BIP32.

Final

I've already coded a small script where vendors may enter their onion url signed with their GPG key. The script will look up on Grams for that GPG key and match the vendor to the url and add it to a public database. If some stores start to popup, i will make it available as a hidden service.
Donations: 12xjgV2sUSMrPAeFHj3r2sgV6wSjt2QMBP

Some notes on anonymart

The original developer of anonymart has decided to abandon the project because interested in something else. I was already collaborating with him before that decision so he decided to pass to me the lead of it. I've reviewed part of the code and i haven't seen security issues, but this doesn't mean it's 100% secure. I'll do my best to review it all and add some missing features like:
  • Two factor authentication
  • Switch from blockchain.info api to lookup on Electrum stratum servers
  • Add the possibility to add more than one image per product
  • Change the order id from incremental to a random one
  • Add JSON api to list store products to facilitate third parties search engines
Unfortunately I'm not very familiar with laravel yet, so before messing with the code I'd need some times, so don't expect huge updates soon.
submitted by spike25 to DeepDotWeb

Bitcoin Core 0.10.0 released | Wladimir | Feb 16 2015

Wladimir on Feb 16 2015:
Bitcoin Core version 0.10.0 is now available from:
https://bitcoin.org/bin/0.10.0/
This is a new major version release, bringing both new features and
bug fixes.
Please report bugs using the issue tracker at github:
https://github.com/bitcoin/bitcoin/issues
The whole distribution is also available as torrent:
https://bitcoin.org/bin/0.10.0/bitcoin-0.10.0.torrent
magnet:?xt=urn:btih:170c61fe09dafecfbb97cb4dccd32173383f4e68&dn=0.10.0&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.publicbt.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.ccc.de%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Fopen.demonii.com%3A1337&ws=https%3A%2F%2Fbitcoin.org%2Fbin%2F
Upgrading and downgrading

How to Upgrade
If you are running an older version, shut it down. Wait until it has completely
shut down (which might take a few minutes for older versions), then run the
installer (on Windows) or just copy over /Applications/Bitcoin-Qt (on Mac) or
bitcoind/bitcoin-qt (on Linux).
Downgrading warning
Because release 0.10.0 makes use of headers-first synchronization and parallel
block download (see further), the block files and databases are not
backwards-compatible with older versions of Bitcoin Core or other software:
  • Blocks will be stored on disk out of order (in the order they are
received, really), which makes it incompatible with some tools or
other programs. Reindexing using earlier versions will also not work
anymore as a result of this.
  • The block index database will now hold headers for which no block is
stored on disk, which earlier versions won't support.
If you want to be able to downgrade smoothly, make a backup of your entire data
directory. Without this your node will need to start syncing (or importing from
bootstrap.dat) anew afterwards. It is possible that the data from a completely
synchronised 0.10 node may be usable in older versions as-is, but this is not
supported and may break as soon as the older version attempts to reindex.
This does not affect wallet forward or backward compatibility.
Notable changes

Faster synchronization
Bitcoin Core now uses 'headers-first synchronization'. This means that we first
ask peers for block headers (a total of 27 megabytes, as of December 2014) and
validate those. In a second stage, when the headers have been discovered, we
download the blocks. However, as we already know about the whole chain in
advance, the blocks can be downloaded in parallel from all available peers.
In practice, this means a much faster and more robust synchronization. On
recent hardware with a decent network link, it can be as little as 3 hours
for an initial full synchronization. You may notice a slower progress in the
very first few minutes, when headers are still being fetched and verified, but
it should gain speed afterwards.
A few RPCs were added/updated as a result of this:
  • getblockchaininfo now returns the number of validated headers in addition to
the number of validated blocks.
  • getpeerinfo lists both the number of blocks and headers we know we have in
common with each peer. While synchronizing, the heights of the blocks that we
have requested from peers (but haven't received yet) are also listed as
'inflight'.
  • A new RPC getchaintips lists all known branches of the block chain,
including those we only have headers for.
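For illustration, a client can watch the two stages of headers-first synchronization over JSON-RPC. The sketch below assumes a local node on the default port 8332 with placeholder rpcuser/rpcpassword credentials set in bitcoin.conf, and uses the third-party 'requests' package.

    # Minimal JSON-RPC sketch: headers run ahead of blocks during initial sync.
    import json
    import requests

    RPC_URL = "http://127.0.0.1:8332/"
    AUTH = ("rpcuser", "rpcpassword")        # placeholders - match your bitcoin.conf

    def rpc(method, *params):
        payload = {"jsonrpc": "1.0", "id": "demo", "method": method, "params": list(params)}
        reply = requests.post(RPC_URL, auth=AUTH, data=json.dumps(payload)).json()
        if reply.get("error"):
            raise RuntimeError(reply["error"])
        return reply["result"]

    info = rpc("getblockchaininfo")
    print("validated headers:", info["headers"])
    print("validated blocks: ", info["blocks"])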
Transaction fee changes
This release automatically estimates how high a transaction fee (or how
high a priority) transactions require to be confirmed quickly. The default
settings will create transactions that confirm quickly; see the new
'txconfirmtarget' setting to control the tradeoff between fees and
confirmation times. Fees are added by default unless the 'sendfreetransactions'
setting is enabled.
Prior releases used hard-coded fees (and priorities), and would
sometimes create transactions that took a very long time to confirm.
Statistics used to estimate fees and priorities are saved in the
data directory in the fee_estimates.dat file just before
program shutdown, and are read in at startup.
New command line options for transaction fee changes:
  • -txconfirmtarget=n : create transactions that have enough fees (or priority)
so they are likely to begin confirmation within n blocks (default: 1). This setting
is over-ridden by the -paytxfee option.
  • -sendfreetransactions : Send transactions as zero-fee transactions if possible
(default: 0)
New RPC commands for fee estimation:
  • estimatefee nblocks : Returns approximate fee-per-1,000-bytes needed for
a transaction to begin confirmation within nblocks. Returns -1 if not enough
transactions have been observed to compute a good estimate.
  • estimatepriority nblocks : Returns approximate priority needed for
a zero-fee transaction to begin confirmation within nblocks. Returns -1 if not
enough free transactions have been observed to compute a good
estimate.
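Reusing the rpc() helper from the synchronization sketch above, querying the new estimators might look like this; both calls return -1 until the node has observed enough transactions.

    # Assumes the rpc() helper defined in the earlier synchronization sketch.
    target = 6                                   # aim to begin confirmation within 6 blocks
    print(rpc("estimatefee", target))            # approximate fee per 1,000 bytes, or -1
    print(rpc("estimatepriority", target))       # approximate priority for a zero-fee tx, or -1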
RPC access control changes
Subnet matching for the purpose of access control is now done
by matching the binary network address, instead of with string wildcard matching.
For the user this means that -rpcallowip takes a subnet specification, which can be
  • a single IP address (e.g. 1.2.3.4 or fe80::0012:3456:789a:bcde)
  • a network/CIDR (e.g. 1.2.3.0/24 or fe80::0000/64)
  • a network/netmask (e.g. 1.2.3.4/255.255.255.0 or fe80::0012:3456:789a:bcde/ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff)
An arbitrary number of -rpcallow arguments can be given. An incoming connection will be accepted if its origin address
matches one of them.
For example:
| 0.9.x and before | 0.10.x |
|--------------------------------------------|---------------------------------------|
| -rpcallowip=192.168.1.1 | -rpcallowip=192.168.1.1 (unchanged) |
| -rpcallowip=192.168.1.* | -rpcallowip=192.168.1.0/24 |
| -rpcallowip=192.168.* | -rpcallowip=192.168.0.0/16 |
| -rpcallowip=* (dangerous!) | -rpcallowip=::/0 (still dangerous!) |
Using wildcards will result in the rule being rejected with the following error in debug.log:
 Error: Invalid -rpcallowip subnet specification: *. Valid are a single IP (e.g. 1.2.3.4), a network/netmask (e.g. 1.2.3.4/255.255.255.0) or a network/CIDR (e.g. 1.2.3.4/24). 
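The effect of binary subnet matching can be sketched in a few lines of Python; this illustrates the matching rule only and is not Bitcoin Core's code.

    # Illustration of subnet-based matching for -rpcallowip.
    import ipaddress

    allowed = [ipaddress.ip_network(s) for s in ("192.168.1.0/24", "fe80::/64")]

    def rpc_allowed(origin: str) -> bool:
        addr = ipaddress.ip_address(origin)
        return any(addr.version == net.version and addr in net for net in allowed)

    print(rpc_allowed("192.168.1.77"))   # True  - inside 192.168.1.0/24
    print(rpc_allowed("192.168.2.77"))   # False - string wildcards like 192.168.* no longer apply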
REST interface
A new HTTP API is exposed when running with the -rest flag, which allows
unauthenticated access to public node data.
It is served on the same port as RPC, but does not need a password, and uses
plain HTTP instead of JSON-RPC.
Assuming a local RPC server running on port 8332, it is possible to request:
In every case, EXT can be bin (for raw binary data), hex (for hex-encoded
binary) or json.
For more details, see the doc/REST-interface.md document in the repository.
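As a sketch, assuming the /rest/tx/TX-HASH.EXT endpoint (see doc/REST-interface.md for the authoritative endpoint list), fetching a transaction as JSON could look like the following. The txid is a placeholder, and looking up arbitrary historical transactions generally requires running with -txindex.

    # Requires the node to be started with -rest; uses the third-party 'requests' package.
    import requests

    txid = "<hex transaction id>"   # placeholder - substitute a transaction your node can look up
    tx = requests.get("http://127.0.0.1:8332/rest/tx/%s.json" % txid).json()
    print(tx["txid"], len(tx["vout"]))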
RPC Server "Warm-Up" Mode
The RPC server is started earlier now, before most of the expensive
initialisations like loading the block index. It is available now almost
immediately after starting the process. However, until all initialisations
are done, it always returns an immediate error with code -28 to all calls.
This new behaviour can be useful for clients to know that a server is already
started and will be available soon (for instance, so that they do not
have to start it themselves).
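A client can take advantage of warm-up mode by polling until the -28 error clears. A self-contained sketch with placeholder credentials, using the third-party 'requests' package:

    # Poll the RPC server until warm-up (error code -28) is over.
    import json
    import time
    import requests

    RPC_URL = "http://127.0.0.1:8332/"
    AUTH = ("rpcuser", "rpcpassword")             # placeholders - match your bitcoin.conf

    def wait_for_warmup(poll_seconds=2):
        while True:
            payload = {"jsonrpc": "1.0", "id": "warmup", "method": "getblockcount", "params": []}
            reply = requests.post(RPC_URL, auth=AUTH, data=json.dumps(payload)).json()
            error = reply.get("error")
            if error is None:
                return reply["result"]            # server is fully initialised
            if error.get("code") == -28:          # still warming up, e.g. loading the block index
                time.sleep(poll_seconds)
                continue
            raise RuntimeError(error)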
Improved signing security
For 0.10 the security of signing against unusual attacks has been
improved by making the signatures constant time and deterministic.
This change is a result of switching signing to use libsecp256k1
instead of OpenSSL. Libsecp256k1 is a cryptographic library
optimized for the curve Bitcoin uses which was created by Bitcoin
Core developer Pieter Wuille.
There exist attacks[1] against most ECC implementations where an
attacker on shared virtual machine hardware could extract a private
key if they could cause a target to sign using the same key hundreds
of times. While using shared hosts and reusing keys are inadvisable
for other reasons, it's a better practice to avoid the exposure.
OpenSSL has code in their source repository for derandomization
and reduction in timing leaks that we've eagerly wanted to use for a
long time, but this functionality has still not made its
way into a released version of OpenSSL. Libsecp256k1 achieves
significantly stronger protection: As far as we're aware this is
the only deployed implementation of constant time signing for
the curve Bitcoin uses and we have reason to believe that
libsecp256k1 is better tested and more thoroughly reviewed
than the implementation in OpenSSL.
[1] https://eprint.iacr.org/2014/161.pdf
Watch-only wallet support
The wallet can now track transactions to and from wallets for which you know
all addresses (or scripts), even without the private keys.
This can be used to track payments without needing the private keys online on a
possibly vulnerable system. In addition, it can help for (manual) construction
of multisig transactions where you are only one of the signers.
One new RPC, importaddress, is added which functions similarly to
importprivkey, but instead takes an address or script (in hexadecimal) as
argument. After using it, outputs credited to this address or script are
considered to be received, and transactions consuming these outputs will be
considered to be sent.
The following RPCs have optional support for watch-only:
getbalance, listreceivedbyaddress, listreceivedbyaccount,
listtransactions, listaccounts, listsinceblock, gettransaction. See the
RPC documentation for those methods for more information.
Compared to using getrawtransaction, this mechanism does not require
-txindex, scales better, integrates better with the wallet, and is compatible
with future block chain pruning functionality. It does mean that all relevant
addresses need to be added to the wallet before the payment, though.
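A watch-only workflow could look like the sketch below, reusing the rpc() helper from the synchronization example earlier in these notes. The address and label are placeholders, and the initial rescan can take a while.

    # Placeholders throughout; reuses the rpc() helper from the earlier sketch.
    watch_addr = "1BitcoinEaterAddressDontSendf59kuE"           # an address you do NOT hold keys for

    rpc("importaddress", watch_addr, "watch-only-demo", True)   # label, rescan=True
    print(rpc("getbalance", "*", 1, True))                      # includeWatchonly=True
    print(rpc("listtransactions", "*", 10, 0, True))            # last 10, includeWatchonly=True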
Consensus library
Starting from 0.10.0, the Bitcoin Core distribution includes a consensus library.
The purpose of this library is to make the verification functionality that is
critical to Bitcoin's consensus available to other applications, e.g. to language
bindings such as [python-bitcoinlib](https://pypi.python.org/pypi/python-bitcoinlib) or
alternative node implementations.
This library is called libbitcoinconsensus.so (or, .dll for Windows).
Its interface is defined in the C header [bitcoinconsensus.h](https://github.com/bitcoin/bitcoin/blob/0.10/src/script/bitcoinconsensus.h).
In its initial version the API includes two functions:
  • bitcoinconsensus_verify_script verifies a script. It returns whether the indicated input of the provided serialized transaction
correctly spends the passed scriptPubKey under additional constraints indicated by flags
  • bitcoinconsensus_version returns the API version, currently at an experimental 0
The functionality is planned to be extended to e.g. UTXO management in upcoming releases, but the interface
for existing methods should remain stable.
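Calling the library from another language is straightforward with a foreign-function interface. The ctypes sketch below follows the argument order implied by the description above (scriptPubKey and its length, the serialized spending transaction and its length, the input index, flags, and an optional error out-parameter); treat bitcoinconsensus.h as the authoritative prototype before relying on it.

    # Sketch of driving libbitcoinconsensus via ctypes; verify the prototype in bitcoinconsensus.h.
    import ctypes

    lib = ctypes.CDLL("libbitcoinconsensus.so")       # use the .dll name on Windows

    def verify_script(script_pubkey: bytes, tx_to: bytes, n_in: int, flags: int = 0) -> bool:
        err = ctypes.c_uint(0)
        result = lib.bitcoinconsensus_verify_script(
            script_pubkey, ctypes.c_uint(len(script_pubkey)),
            tx_to, ctypes.c_uint(len(tx_to)),
            ctypes.c_uint(n_in), ctypes.c_uint(flags),
            ctypes.byref(err))
        return result == 1

    print(lib.bitcoinconsensus_version())             # the experimental API version, currently 0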
Standard script rules relaxed for P2SH addresses
The IsStandard() rules have been almost completely removed for P2SH
redemption scripts, allowing applications to make use of any valid
script type, such as "n-of-m OR y", hash-locked oracle addresses, etc.
While the Bitcoin protocol has always supported these types of script,
actually using them on mainnet has previously been inconvenient, as
standard Bitcoin Core nodes wouldn't relay them to miners, nor would
most miners include them in blocks they mined.
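To make the P2SH mechanics concrete, any redeem script can be turned into a mainnet address by hashing the script and Base58Check-encoding the result with version byte 0x05; the script itself is only revealed when the output is spent. A minimal sketch (assuming a Python build whose hashlib exposes RIPEMD-160):

```python
import hashlib

BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(payload: bytes) -> str:
    """Base58Check: append a 4-byte double-SHA256 checksum, then Base58-encode."""
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + checksum
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, rem = divmod(n, 58)
        out = BASE58[rem] + out
    for byte in data:                 # preserve leading zero bytes as '1' characters
        if byte != 0:
            break
        out = "1" + out
    return out

def p2sh_address(redeem_script: bytes) -> str:
    """Mainnet P2SH address = Base58Check(0x05 || RIPEMD160(SHA256(script)))."""
    h = hashlib.new("ripemd160", hashlib.sha256(redeem_script).digest()).digest()
    return base58check(b"\x05" + h)

# Example: OP_TRUE (0x51) as a trivial "anyone can spend" redeem script.
print(p2sh_address(bytes([0x51])))
```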
bitcoin-tx
It has been observed that many of the RPC functions offered by bitcoind are
"pure functions", and operate independently of the bitcoind wallet. This
included many of the RPC "raw transaction" API functions, such as
createrawtransaction.
bitcoin-tx is a newly introduced command line utility designed to enable easy
manipulation of bitcoin transactions. A summary of its operation may be
obtained via "bitcoin-tx --help" Transactions may be created or signed in a
manner similar to the RPC raw tx API. Transactions may be updated, deleting
inputs or outputs, or appending new inputs and outputs. Custom scripts may be
easily composed using a simple text notation, borrowed from the bitcoin test
suite.
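As an informal illustration of that notation, the snippet below shells out to bitcoin-tx to assemble an unsigned transaction with one input and one output. The txid, output index, amount and address are placeholders, and the exact command names should be checked against "bitcoin-tx --help" for your build.

```python
import subprocess

# Placeholders: a previous output to spend and a destination address.
prev_txid = "0" * 64
prev_vout = 0
dest_addr = "1BitcoinEaterAddressDontSendf59kuE"
amount_btc = "0.10"

# -create starts from an empty transaction; in= adds an input, outaddr= adds an output.
result = subprocess.run(
    ["bitcoin-tx", "-create",
     f"in={prev_txid}:{prev_vout}",
     f"outaddr={amount_btc}:{dest_addr}"],
    capture_output=True, text=True, check=True)

raw_hex = result.stdout.strip()
print(raw_hex)  # unsigned raw transaction hex, ready to be signed
```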
This tool may be used for experimenting with new transaction types, signing
multi-party transactions, and many other uses. Long term, the goal is to
deprecate and remove "pure function" RPC API calls, as those do not require a
server round-trip to execute.
Other utilities "bitcoin-key" and "bitcoin-script" have been proposed, making
key and script operations easily accessible via command line.
Mining and relay policy enhancements
Bitcoin Core's block templates are now for version 3 blocks only, and any mining
software relying on its getblocktemplate must be updated in parallel to use
either libblkmaker version 0.4.2 or any version from 0.5.1 onward.
If you are solo mining, this will affect you the moment you upgrade Bitcoin
Core, which must be done prior to BIP66 achieving its 951/1001 status.
If you are mining with the stratum mining protocol: this does not affect you.
If you are mining with the getblocktemplate protocol to a pool: this will affect
you at the pool operator's discretion, which must be no later than BIP66
achieving its 951/1001 status.
The prioritisetransaction RPC method has been added to enable miners to
manipulate the priority of transactions on an individual basis.
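For example, a miner might bump an individual transaction roughly as follows. This is a hypothetical sketch via bitcoin-cli; the txid and deltas are placeholders, and the parameter order should be confirmed against the RPC help.

```python
import json
import subprocess

def cli(*args):
    """Thin wrapper around bitcoin-cli (assumes a running, configured bitcoind)."""
    out = subprocess.run(["bitcoin-cli", *map(str, args)],
                         capture_output=True, text=True, check=True).stdout.strip()
    return json.loads(out) if out else None

txid = "f" * 64  # placeholder transaction id

# prioritisetransaction <txid> <priority delta> <fee delta in satoshis>:
# a positive fee delta makes block template selection treat the transaction
# as if it paid that much more in fees (it does not change the actual fee).
cli("prioritisetransaction", txid, 0.0, 10000)
```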
Bitcoin Core now supports BIP 22 long polling, so mining software can be
notified immediately of new templates rather than having to poll periodically.
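Schematically, long polling works by handing the longpollid from one getblocktemplate result back in the next request, which bitcoind then holds open until the template changes. A hypothetical sketch with placeholder credentials:

```python
import base64
import json
import urllib.request

RPC_URL = "http://127.0.0.1:8332"                     # placeholder
AUTH = base64.b64encode(b"rpcuser:rpcpass").decode()  # placeholder credentials

def rpc(method, params, timeout=None):
    payload = json.dumps({"jsonrpc": "1.0", "id": "lp",
                          "method": method, "params": params}).encode()
    req = urllib.request.Request(RPC_URL, data=payload,
                                 headers={"Content-Type": "application/json",
                                          "Authorization": "Basic " + AUTH})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["result"]

# First call returns a template together with a longpollid token.
template = rpc("getblocktemplate", [{}])
longpollid = template["longpollid"]

# Passing the token back makes bitcoind hold this request open until the
# template changes (e.g. a new block arrives), instead of the miner polling
# on a fixed timer.
fresh_template = rpc("getblocktemplate", [{"longpollid": longpollid}], timeout=600)
```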
Support for BIP 23 block proposals is now available in Bitcoin Core's
getblocktemplate method. This enables miners to check the basic validity of
their next block before expending work on it, reducing risks of accidental
hardforks or mining invalid blocks.
Two new options to control mining policy:
  • -datacarrier=0/1 : Relay and mine "data carrier" (OP_RETURN) transactions
if this is 1.
  • -datacarriersize=n : Maximum size, in bytes, we consider acceptable for
"data carrier" outputs.
The relay policy has changed to more properly implement the desired behavior of not
relaying free (or very low fee) transactions unless they have a priority above the
AllowFreeThreshold(), in which case they are relayed subject to the rate limiter.
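The "priority" referred to here is the classic coin-age priority: the sum over inputs of value (in base units) times confirmations, divided by the transaction size, with the free-relay threshold corresponding to roughly one coin aged one day in a 250-byte transaction. A small Python sketch of that arithmetic with made-up numbers; the exact size measure the node uses differs slightly, so treat this as illustrative only.

```python
COIN = 100_000_000                         # base units (satoshis) per BTC
ALLOW_FREE_THRESHOLD = COIN * 144 / 250    # ~1 BTC, aged one day, in a 250-byte tx

def priority(inputs, tx_size_bytes):
    """inputs: list of (value_in_satoshis, confirmations) pairs."""
    return sum(value * confs for value, confs in inputs) / tx_size_bytes

# Hypothetical transaction: one 0.5 BTC input with 400 confirmations, 226 bytes.
p = priority([(50_000_000, 400)], 226)
print(p, p > ALLOW_FREE_THRESHOLD)  # would this qualify for free relay?
```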
BIP 66: strict DER encoding for signatures
Bitcoin Core 0.10 implements BIP 66, which introduces block version 3, and a new
consensus rule, which prohibits non-DER signatures. Such transactions have been
non-standard since Bitcoin v0.8.0 (released in February 2013), but were
technically still permitted inside blocks.
This change breaks the dependency on OpenSSL's signature parsing, and is
required if implementations would want to remove all of OpenSSL from the
consensus code.
The same miner-voting mechanism as in BIP 34 is used: when 751 out of a
sequence of 1001 blocks have version number 3 or higher, the new consensus
rule becomes active for those blocks. When 951 out of a sequence of 1001
blocks have version number 3 or higher, it becomes mandatory for all blocks.
Backward compatibility with current mining software is NOT provided, thus miners
should read the first paragraph of "Mining and relay policy enhancements" above.
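The shape of the strict-DER check introduced by BIP 66 can be sketched as follows, following the structure the BIP describes (length bytes, a 0x30 compound marker, two 0x02 integers with no negative or unnecessarily padded values, plus a trailing sighash byte). This is an illustrative restatement in Python, not the consensus code itself.

```python
def is_valid_signature_encoding(sig: bytes) -> bool:
    """Loosely follows the IsValidSignatureEncoding() check described in BIP 66.

    `sig` is the signature as placed in the scriptSig: a DER sequence plus one
    trailing sighash-type byte.
    """
    # Format: 0x30 [total-length] 0x02 [R-length] [R] 0x02 [S-length] [S] [sighash]
    if len(sig) < 9 or len(sig) > 73:
        return False
    if sig[0] != 0x30:                        # compound (sequence) marker
        return False
    if sig[1] != len(sig) - 3:                # length must cover everything but marker, length, sighash
        return False
    len_r = sig[3]
    if 5 + len_r >= len(sig):                 # S length byte must still be inside the signature
        return False
    len_s = sig[5 + len_r]
    if len_r + len_s + 7 != len(sig):         # lengths must add up exactly
        return False
    if sig[2] != 0x02 or len_r == 0:          # R must be a non-empty integer
        return False
    if sig[4] & 0x80:                         # R must not be negative
        return False
    if len_r > 1 and sig[4] == 0x00 and not (sig[5] & 0x80):
        return False                          # no unnecessary null padding on R
    if sig[len_r + 4] != 0x02 or len_s == 0:  # S must be a non-empty integer
        return False
    if sig[len_r + 6] & 0x80:                 # S must not be negative
        return False
    if len_s > 1 and sig[len_r + 6] == 0x00 and not (sig[len_r + 7] & 0x80):
        return False                          # no unnecessary null padding on S
    return True
```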
0.10.0 Change log

Detailed release notes follow. This overview includes changes that affect external
behavior, not code moves, refactors or string updates.
RPC:
  • f923c07 Support IPv6 lookup in bitcoin-cli even when IPv6 only bound on localhost
  • b641c9c Fix addnode "onetry": Connect with OpenNetworkConnection
  • 171ca77 estimatefee / estimatepriority RPC methods
  • b750cf1 Remove cli functionality from bitcoind
  • f6984e8 Add "chain" to getmininginfo, improve help in getblockchaininfo
  • 99ddc6c Add nLocalServices info to RPC getinfo
  • cf0c47b Remove getwork() RPC call
  • 2a72d45 prioritisetransaction
  • e44fea5 Add an option -datacarrier to allow users to disable relaying/mining data carrier transactions
  • 2ec5a3d Prevent easy RPC memory exhaustion attack
  • d4640d7 Added argument to getbalance to include watchonly addresses and fixed errors in balance calculation
  • 83f3543 Added argument to listaccounts to include watchonly addresses
  • 952877e Showing 'involvesWatchonly' property for transactions returned by 'listtransactions' and 'listsinceblock'. It is only appended when the transaction involves a watchonly address
  • d7d5d23 Added argument to listtransactions and listsinceblock to include watchonly addresses
  • f87ba3d added includeWatchonly argument to 'gettransaction' because it affects balance calculation
  • 0fa2f88 added includedWatchonly argument to listreceivedbyaddress/...account
  • 6c37f7f getrawchangeaddress: fail when keypool exhausted and wallet locked
  • ff6a7af getblocktemplate: longpolling support
  • c4a321f Add peerid to getpeerinfo to allow correlation with the logs
  • 1b4568c Add vout to ListTransactions output
  • b33bd7a Implement "getchaintips" RPC command to monitor blockchain forks
  • 733177e Remove size limit in RPC client, keep it in server
  • 6b5b7cb Categorize rpc help overview
  • 6f2c26a Closely track mempool byte total. Add "getmempoolinfo" RPC
  • aa82795 Add detailed network info to getnetworkinfo RPC
  • 01094bd Don't reveal whether password is <20 or >20 characters in RPC
  • 57153d4 rpc: Compute number of confirmations of a block from block height
  • ff36cbe getnetworkinfo: export local node's client sub-version string
  • d14d7de SanitizeString: allow '(' and ')'
  • 31d6390 Fixed setaccount accepting foreign address
  • b5ec5fe update getnetworkinfo help with subversion
  • ad6e601 RPC additions after headers-first
  • 33dfbf5 rpc: Fix leveldb iterator leak, and flush before gettxoutsetinfo
  • 2aa6329 Enable customising node policy for datacarrier data size with a -datacarriersize option
  • f877aaa submitblock: Use a temporary CValidationState to determine accurately the outcome of ProcessBlock
  • e69a587 submitblock: Support for returning specific rejection reasons
  • af82884 Add "warmup mode" for RPC server
  • e2655e0 Add unauthenticated HTTP REST interface to public blockchain data
  • 683dc40 Disable SSLv3 (in favor of TLS) for the RPC client and server
  • 44b4c0d signrawtransaction: validate private key
  • 9765a50 Implement BIP 23 Block Proposal
  • f9de17e Add warning comment to getinfo
Command-line options:
  • ee21912 Use netmasks instead of wildcards for IP address matching
  • deb3572 Add -rpcbind option to allow binding RPC port on a specific interface
  • 96b733e Add -version option to get just the version
  • 1569353 Add -stopafterblockimport option
  • 77cbd46 Let -zapwallettxes recover transaction meta data
  • 1c750db remove -tor compatibility code (only allow -onion)
  • 4aaa017 rework help messages for fee-related options
  • 4278b1d Clarify error message when invalid -rpcallowip
  • 6b407e4 -datadir is now allowed in config files
  • bdd5b58 Add option -sysperms to disable 077 umask (create new files with system default umask)
  • cbe39a3 Add "bitcoin-tx" command line utility and supporting modules
  • dbca89b Trigger -alertnotify if network is upgrading without you
  • ad96e7c Make -reindex cope with out-of-order blocks
  • 16d5194 Skip reindexed blocks individually
  • ec01243 --tracerpc option for regression tests
  • f654f00 Change -genproclimit default to 1
  • 3c77714 Make -proxy set all network types, avoiding a connect leak
  • 57be955 Remove -printblock, -printblocktree, and -printblockindex
  • ad3d208 remove -maxorphanblocks config parameter since it is no longer functional
Block and transaction handling:
  • 7a0e84d ProcessGetData(): abort if a block file is missing from disk
  • 8c93bf4 LoadBlockIndexDB(): Require block db reindex if any blk*.dat files are missing
  • 77339e5 Get rid of the static chainMostWork (optimization)
  • 4e0eed8 Allow ActivateBestChain to release its lock on cs_main
  • 18e7216 Push cs_mains down in ProcessBlock
  • fa126ef Avoid undefined behavior using CFlatData in CScript serialization
  • 7f3b4e9 Relax IsStandard rules for pay-to-script-hash transactions
  • c9a0918 Add a skiplist to the CBlockIndex structure
  • bc42503 Use unordered_map for CCoinsViewCache with salted hash (optimization)
  • d4d3fbd Do not flush the cache after every block outside of IBD (optimization)
  • ad08d0b Bugfix: make CCoinsViewMemPool support pruned entries in underlying cache
  • 5734d4d Only remove actually failed blocks from setBlockIndexValid
  • d70bc52 Rework block processing benchmark code
  • 714a3e6 Only keep setBlockIndexValid entries that are possible improvements
  • ea100c7 Reduce maximum coinscache size during verification (reduce memory usage)
  • 4fad8e6 Reject transactions with excessive numbers of sigops
  • b0875eb Allow BatchWrite to destroy its input, reducing copying (optimization)
  • 92bb6f2 Bypass reloading blocks from disk (optimization)
  • 2e28031 Perform CVerifyDB on pcoinsdbview instead of pcoinsTip (reduce memory usage)
  • ab15b2e Avoid copying undo data (optimization)
  • 341735e Headers-first synchronization
  • afc32c5 Fix rebuild-chainstate feature and improve its performance
  • e11b2ce Fix large reorgs
  • ed6d1a2 Keep information about all block files in memory
  • a48f2d6 Abstract context-dependent block checking from acceptance
  • 7e615f5 Fixed mempool sync after sending a transaction
  • 51ce901 Improve chainstate/blockindex disk writing policy
  • a206950 Introduce separate flushing modes
  • 9ec75c5 Add a locking mechanism to IsInitialBlockDownload to ensure it never goes from false to true
  • 868d041 Remove coinbase-dependent transactions during reorg
  • 723d12c Remove txn which are invalidated by coinbase maturity during reorg
  • 0cb8763 Check against MANDATORY flags prior to accepting to mempool
  • 8446262 Reject headers that build on an invalid parent
  • 008138c Bugfix: only track UTXO modification after lookup
P2P protocol and network code:
  • f80cffa Do not trigger a DoS ban if SCRIPT_VERIFY_NULLDUMMY fails
  • c30329a Add testnet DNS seed of Alex Kotenko
  • 45a4baf Add testnet DNS seed of Andreas Schildbach
  • f1920e8 Ping automatically every 2 minutes (unconditionally)
  • 806fd19 Allocate receive buffers on the fly
  • 6ecf3ed Display unknown commands received
  • aa81564 Track peers' available blocks
  • caf6150 Use async name resolving to improve net thread responsiveness
  • 9f4da19 Use pong receive time rather than processing time
  • 0127a9b remove SOCKS4 support from core and GUI, use SOCKS5
  • 40f5cb8 Send rejects and apply DoS scoring for errors in direct block validation
  • dc942e6 Introduce whitelisted peers
  • c994d2e prevent SOCKET leak in BindListenPort()
  • a60120e Add built-in seeds for .onion
  • 60dc8e4 Allow -onlynet=onion to be used
  • 3a56de7 addrman: Do not propagate obviously poor addresses onto the network
  • 6050ab6 netbase: Make SOCKS5 negotiation interruptible
  • 604ee2a Remove tx from AlreadyAskedFor list once we receive it, not when we process it
  • efad808 Avoid reject message feedback loops
  • 71697f9 Separate protocol versioning from clientversion
  • 20a5f61 Don't relay alerts to peers before version negotiation
  • b4ee0bd Introduce preferred download peers
  • 845c86d Do not use third party services for IP detection
  • 12a49ca Limit the number of new addresses to accumulate
  • 35e408f Regard connection failures as attempt for addrman
  • a3a7317 Introduce 10 minute block download timeout
  • 3022e7d Require sufficient priority for relay of free transactions
  • 58fda4d Update seed IPs, based on bitcoin.sipa.be crawler data
  • 18021d0 Remove bitnodes.io from dnsseeds.
Validation:
  • 6fd7ef2 Also switch the (unused) verification code to low-s instead of even-s
  • 584a358 Do merkle root and txid duplicates check simultaneously
  • 217a5c9 When transaction outputs exceed inputs, show the offending amounts so as to aid debugging
  • f74fc9b Print input index when signature validation fails, to aid debugging
  • 6fd59ee script.h: set_vch() should shift a >32 bit value
  • d752ba8 Add SCRIPT_VERIFY_SIGPUSHONLY (BIP62 rule 2) (test only)
  • 698c6ab Add SCRIPT_VERIFY_MINIMALDATA (BIP62 rules 3 and 4) (test only)
  • ab9edbd script: create sane error return codes for script validation and remove logging
  • 219a147 script: check ScriptError values in script tests
  • 0391423 Discourage NOPs reserved for soft-fork upgrades
  • 98b135f Make STRICTENC invalid pubkeys fail the script rather than the opcode
  • 307f7d4 Report script evaluation failures in log and reject messages
  • ace39db consensus: guard against openssl's new strict DER checks
  • 12b7c44 Improve robustness of DER recoding code
  • 76ce5c8 fail immediately on an empty signature
Build system:
  • f25e3ad Fix build in OS X 10.9
  • 65e8ba4 build: Switch to non-recursive make
  • 460b32d build: fix broken boost chrono check on some platforms
  • 9ce0774 build: Fix windows configure when using --with-qt-libdir
  • ea96475 build: Add mention of --disable-wallet to bdb48 error messages
  • 1dec09b depends: add shared dependency builder
  • c101c76 build: Add --with-utils (bitcoin-cli and bitcoin-tx, default=yes). Help string consistency tweaks. Target sanity check fix
  • e432a5f build: add option for reducing exports (v2)
  • 6134b43 Fixing condition 'sabotaging' MSVC build
  • af0bd5e osx: fix signing to make Gatekeeper happy (again)
  • a7d1f03 build: fix dynamic boost check when --with-boost= is used
  • d5fd094 build: fix qt test build when libprotobuf is in a non-standard path
  • 2cf5f16 Add libbitcoinconsensus library
  • 914868a build: add a deterministic dmg signer
  • 2d375fe depends: bump openssl to 1.0.1k
  • b7a4ecc Build: Only check for boost when building code that requires it
Wallet:
  • b33d1f5 Use fee/priority estimates in wallet CreateTransaction
  • 4b7b1bb Sanity checks for estimates
  • c898846 Add support for watch-only addresses
  • d5087d1 Use script matching rather than destination matching for watch-only
  • d88af56 Fee fixes
  • a35b55b Dont run full check every time we decrypt wallet
  • 3a7c348 Fix make_change to not create half-satoshis
  • f606bb9 fix a possible memory leak in CWalletDB::Recover
  • 870da77 fix possible memory leaks in CWallet::EncryptWallet
  • ccca27a Watch-only fixes
  • 9b1627d [Wallet] Reduce minTxFee for transaction creation to 1000 satoshis
  • a53fd41 Deterministic signing
  • 15ad0b5 Apply AreSane() checks to the fees from the network
  • 11855c1 Enforce minRelayTxFee on wallet created tx and add a maxtxfee option
GUI:
  • c21c74b osx: Fix missing dock menu with qt5
  • b90711c Fix Transaction details shows wrong To:
  • 516053c Make links in 'About Bitcoin Core' clickable
  • bdc83e8 Ensure payment request network matches client network
  • 65f78a1 Add GUI view of peer information
  • 06a91d9 VerifyDB progress reporting
  • fe6bff2 Add BerkeleyDB version info to RPCConsole
  • b917555 PeerTableModel: Fix potential deadlock. #4296
  • dff0e3b Improve rpc console history behavior
  • 95a9383 Remove CENT-fee-rule from coin control completely
  • 56b07d2 Allow setting listen via GUI
  • d95ba75 Log messages with type>QtDebugMsg as non-debug
  • 8969828 New status bar Unit Display Control and related changes
  • 674c070 seed OpenSSL PRNG with Windows event data
  • 509f926 Payment request parsing on startup now only changes network if a valid network name is specified
  • acd432b Prevent balloon-spam after rescan
  • 7007402 Implement SI-style (thin space) thousands separator
  • 91cce17 Use fixed-point arithmetic in amount spinbox
  • bdba2dd Remove an obscure option no-one cares about
  • bd0aa10 Replace the temporary file hack currently used to change Bitcoin-Qt's dock icon (OS X) with a buffer-based solution
  • 94e1b9e Re-work overviewpage UI
  • 8bfdc9a Better looking trayicon
  • b197bf3 disable tray interactions when client model set to 0
  • 1c5f0af Add column Watch-only to transactions list
  • 21f139b Fix tablet crash. closes #4854
  • e84843c Broken addresses on command line no longer trigger testnet
  • a49f11d Change splash screen to normal window
  • 1f9be98 Disable App Nap on OSX 10.9+
  • 27c3e91 Add proxy to options overridden if necessary
  • 4bd1185 Allow "emergency" shutdown during startup
  • d52f072 Don't show wallet options in the preferences menu when running with -disablewallet
  • 6093aa1 Qt: QProgressBar CPU-Issue workaround
  • 0ed9675 [Wallet] Add global boolean whether to send free transactions (default=true)
  • ed3e5e4 [Wallet] Add global boolean whether to pay at least the custom fee (default=true)
  • e7876b2 [Wallet] Prevent user from paying a non-sense fee
  • c1c9d5b Add Smartfee to GUI
  • e0a25c5 Make askpassphrase dialog behave more sanely
  • 94b362d On close of splashscreen interrupt verifyDB
  • b790d13 English translation update
  • 8543b0d Correct tooltip on address book page
Tests:
  • b41e594 Fix script test handling of empty scripts
  • d3a33fc Test CHECKMULTISIG with m == 0 and n == 0
  • 29c1749 Let tx (in)valid tests use any SCRIPT_VERIFY flag
  • 6380180 Add rejection of non-null CHECKMULTISIG dummy values
  • 21bf3d2 Add tests for BoostAsioToCNetAddr
  • b5ad5e7 Add Python test for -rpcbind and -rpcallowip
  • 9ec0306 Add CODESEPARATOR/FindAndDelete() tests
  • 75ebced Added many rpc wallet tests
  • 0193fb8 Allow multiple regression tests to run at once
  • 92a6220 Hook up sanity checks
  • 3820e01 Extend and move all crypto tests to crypto_tests.cpp
  • 3f9a019 added list/get received by address/ account tests
  • a90689f Remove timing-based signature cache unit test
  • 236982c Add skiplist unit tests
  • f4b00be Add CChain::GetLocator() unit test
  • b45a6e8 Add test for getblocktemplate longpolling
  • cdf305e Set -discover=0 in regtest framework
  • ed02282 additional test for OP_SIZE in script_valid.json
  • 0072d98 script tests: BOOLAND, BOOLOR decode to integer
  • 833ff16 script tests: values that overflow to 0 are true
  • 4cac5db script tests: value with trailing 0x00 is true
  • 89101c6 script test: test case for 5-byte bools
  • d2d9dc0 script tests: add tests for CHECKMULTISIG limits
  • d789386 Add "it works" test for bitcoin-tx
  • df4d61e Add bitcoin-tx tests
  • aa41ac2 Test IsPushOnly() with invalid push
  • 6022b5d Make script_{valid,invalid}.json validation flags configurable
  • 8138cbe Add automatic script test generation, and actual checksig tests
  • ed27e53 Add coins_tests with a large randomized CCoinViewCache test
  • 9df9cf5 Make SCRIPT_VERIFY_STRICTENC compatible with BIP62
  • dcb9846 Extend getchaintips RPC test
  • 554147a Ensure MINIMALDATA invalid tests can only fail one way
  • dfeec18 Test every numeric-accepting opcode for correct handling of the numeric minimal encoding rule
  • 2b62e17 Clearly separate PUSHDATA and numeric argument MINIMALDATA tests
  • 16d78bd Add valid invert of invalid every numeric opcode tests
  • f635269 tests: enable alertnotify test for Windows
  • 7a41614 tests: allow rpc-tests to get filenames for bitcoind and bitcoin-cli from the environment
  • 5122ea7 tests: fix forknotify.py on windows
  • fa7f8cd tests: remove old pull-tester scripts
  • 7667850 tests: replace the old (unused since Travis) tests with new rpc test scripts
  • f4e0aef Do signature-s negation inside the tests
  • 1837987 Optimize -regtest setgenerate block generation
  • 2db4c8a Fix node ranges in the test framework
  • a8b2ce5 regression test only setmocktime RPC call
  • daf03e7 RPC tests: create initial chain with specific timestamps
  • 8656dbb Port/fix txnmall.sh regression test
  • ca81587 Test the exact order of CHECKMULTISIG sig/pubkey evaluation
  • 7357893 Prioritize and display -testsafemode status in UI
  • f321d6b Add key generation/verification to ECC sanity check
  • 132ea9b miner_tests: Disable checkpoints so they don't fail the subsidy-change test
  • bc6cb41 QA RPC tests: Add tests for block proposals
  • f67a9ce Use deterministically generated script tests
  • 11d7a7d [RPC] add rpc-test for http keep-alive (persistent connections)
  • 34318d7 RPC-test based on invalidateblock for mempool coinbase spends
  • 76ec867 Use actually valid transactions for script tests
  • c8589bf Add actual signature tests
  • e2677d7 Fix smartfees test for change to relay policy
  • 263b65e tests: run sanity checks in tests too
Miscellaneous:
  • 122549f Fix incorrect checkpoint data for testnet3
  • 5bd02cf Log used config file to debug.log on startup
  • 68ba85f Updated Debian example bitcoin.conf with config from wiki + removed some cruft and updated comments
  • e5ee8f0 Remove -beta suffix
  • 38405ac Add comment regarding experimental-use service bits
  • be873f6 Issue warning if collecting RandSeed data failed
  • 8ae973c Allocate more space if necessary in RandSeedAddPerfMon
  • 675bcd5 Correct comment for 15-of-15 p2sh script size
  • fda3fed libsecp256k1 integration
  • 2e36866 Show nodeid instead of addresses in log (for anonymity) unless otherwise requested
  • cd01a5e Enable paranoid corruption checks in LevelDB >= 1.16
  • 9365937 Add comment about never updating nTimeOffset past 199 samples
  • 403c1bf contrib: remove getwork-based pyminer (as getwork API call has been removed)
  • 0c3e101 contrib: Added systemd .service file in order to help distributions integrate bitcoind
  • 0a0878d doc: Add new DNSseed policy
  • 2887bff Update coding style and add .clang-format
  • 5cbda4f Changed LevelDB cursors to use scoped pointers to ensure destruction when going out of scope
  • b4a72a7 contrib/linearize: split output files based on new-timestamp-year or max-file-size
  • e982b57 Use explicit fflush() instead of setvbuf()
  • 234bfbf contrib: Add init scripts and docs for Upstart and OpenRC
  • 01c2807 Add warning about the merkle-tree algorithm duplicate txid flaw
  • d6712db Also create pid file in non-daemon mode
  • 772ab0e contrib: use batched JSON-RPC in linearize-hashes (optimization)
  • 7ab4358 Update bash-completion for v0.10
  • 6e6a36c contrib: show pull # in prompt for github-merge script
  • 5b9f842 Upgrade leveldb to 1.18, make chainstate databases compatible between ARM and x86 (issue #2293)
  • 4e7c219 Catch UTXO set read errors and shutdown
  • 867c600 Catch LevelDB errors during flush
  • 06ca065 Fix CScriptID(const CScript& in) in empty script case
Credits

Thanks to everyone who contributed to this release:
  • 21E14
  • Adam Weiss
  • Aitor Pazos
  • Alexander Jeng
  • Alex Morcos
  • Alon Muroch
  • Andreas Schildbach
  • Andrew Poelstra
  • Andy Alness
  • Ashley Holman
  • Benedict Chan
  • Ben Holden-Crowther
  • Bryan Bishop
  • BtcDrak
  • Christian von Roques
  • Clinton Christian
  • Cory Fields
  • Cozz Lovan
  • daniel
  • Daniel Kraft
  • David Hill
  • Derek701
  • dexX7
  • dllud
  • Dominyk Tiller
  • Doug
  • elichai
  • elkingtowa
  • ENikS
  • Eric Shaw
  • Federico Bond
  • Francis GASCHET
  • Gavin Andresen
  • Giuseppe Mazzotta
  • Glenn Willen
  • Gregory Maxwell
  • gubatron
  • HarryWu
  • himynameismartin
  • Huang Le
  • Ian Carroll
  • imharrywu
  • Jameson Lopp
  • Janusz Lenar
  • JaSK
  • Jeff Garzik
  • JL2035
  • Johnathan Corgan
  • Jonas Schnelli
  • jtimon
  • Julian Haight
  • Kamil Domanski
  • kazcw
  • kevin
  • kiwigb
  • Kosta Zertsekel
  • LongShao007
  • Luke Dashjr
  • Mark Friedenbach
  • Mathy Vanvoorden
  • Matt Corallo
  • Matthew Bogosian
  • Micha
  • Michael Ford
  • Mike Hearn
  • mrbandrews
  • mruddy
  • ntrgn
  • Otto Allmendinger
  • paveljanik
  • Pavel Vasin
  • Peter Todd
  • phantomcircuit
  • Philip Kaufmann
  • Pieter Wuille
  • pryds
  • randy-waterhouse
  • R E Broadley
  • Rose Toomey
  • Ross Nicoll
  • Roy Badami
  • Ruben Dario Ponticelli
  • Rune K. Svendsen
  • Ryan X. Charles
  • Saivann
  • sandakersmann
  • SergioDemianLerner
  • shshshsh
  • sinetek
  • Stuart Cardall
  • Suhas Daftuar
  • Tawanda Kembo
  • Teran McKinney
  • tm314159
  • Tom Harding
  • Trevin Hofmann
  • Whit J
  • Wladimir J. van der Laan
  • Yoichi Hirai
  • Zak Wilcox
As well as everyone that helped translating on [Transifex](https://www.transifex.com/projects/p/bitcoin/).
Also lots of thanks to the bitcoin.org website team David A. Harding and Saivann Carignan.
Wladimir
original: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-February/007480.html