I doubt it's politics - it's probably a technical issue, since Microsoft runs a lot of its own infrastructure on Linux. I suspect there are issues porting the ancient SQL Server code base properly to ARM, and it's deemed not cost-effective when you can run on x64 Linux. We'll see how that goes as ARM gets a bigger chunk of the server market and the server hardware improves and provides efficiencies that outrun x64.
Is there anything in SQL Server 2022 or earlier that requires the x64 instruction set, and is that why there isn't an official port to ARM? Or is it really stupid corporate politics keeping SQL Server on x64? As for Stacey P. Developer and her Windows laptop, I imagine she'll have to stay on x64, or ditch Visual Studio 2022 for Visual Studio Code and move to a MacBook Pro M4. I've been doing C# on my Mac for years. It's definitely a change, but it's not hard.
@Kerry - yes, that's a very good point. But maybe that's one more reason why the styling changes might help while leaving the original intent of read-only intact. For screen reader behavior I suppose you'd probably leave the tab-ability alone so you can tab in.
Not sure what the right behavior for read-only realistically is. I mean, there are so many options for how that could be handled:
Lots of options - who's to say which is the right one?
What are the a11y ramifications of using readonly vs disabled? If some of my consumers use a screen reader, how will the reader treat a readonly input control vs a disabled input control? Depending on your audience, this may be something else you need to keep in mind.
I'm currently supporting a legacy codebase with many CSS styling changes that were made to create a specific user experience that is in some cases divorced from the semantic HTML. Now that we have a mandate to make our applications compliant with WCAG 2.1 AA, we are having to revisit a lot of these decisions and bring them back to HTML that expresses its intent correctly, and ensuring that keyboard-only users, screen reader users, and standard users all have a similar experience.
A little bit of script can get the disabled fields submitting with the form. 😃
document.addEventListener("formdata", e => {
    const form = e.target;
    const formData = e.formData;

    // Find all disabled input/textarea elements with a "data-submit-disabled" attribute:
    const selector = ":is(input, textarea)[data-submit-disabled]:disabled";
    form.querySelectorAll(selector).forEach(el => formData.append(el.name, el.value));
});
@Andrew - you can still select and copy text of a disabled control. Not much of a difference there.
A readonly control still has the great feature that you can copy its value to the clipboard; a disabled control disallows copying. I prefer .Enabled = .F. and .ReadOnly = .T. for most cases, especially for long text fields. Still have to use Visual FoxPro 😦 The web has different styling and UI rules, agreed.
Rick, thanks for the details on using WebView for PDF conversion. My need to generate PDFs using third-party (free) tools is fulfilled, with conversion time reduced from 1 hr to 30 secs. I appreciate your sharing this knowledge with us - anyone who has struggled with PDF conversion will surely appreciate your useful automation of WebView!
Aloha Rick - Great post, very helpful! I'm on a new Mac Mini M4 Pro running Parallels and Windows 11 ARM. All was cool until I hit these SQL Server issues. I used the GitHub installer hacks you (and @Stephen) pointed to (jimmy98y) and had no trouble installing 2022 Developer. Installed SSMS, no problem, and all runs great. Currently though, I'm dead in the water trying to get an ODBC connection to it using Python and pyodbc. Tons of digging and trial-and-error, but I have not been able to get past a "Data source name not found" error. Have you tried getting an outside program (e.g. Python) to connect to the DB? Any thoughts? Mahalo - John
Yup - that's why I posted it. I hadn't thought of this either until I saw it in an app in practice.
I like that. I like that a lot. Such a simple idea I'm surprised I haven't come across it before. I may try to add an animation to the confirm button to give an indication as to how long the user has left to confirm before it reverts state. First time I clicked it I wasn't sure how to un-set it, but yours is a great idea I'll try to run with (not something I often say when reading articles TBH!). Great work Rick.
Hi Rick, your info was invaluable, although none of the above fixed my issue running W11 24H2. I tried everything. Then I encountered a single thread that helped. I'm posting the solution as it can complement your insightful post.
Run regedit to open the Registry Editor. After that, navigate to the following path:
HKEY_CLASSES_ROOT\Directory\shellex\PropertySheetHandlers\Sharing
If the “Sharing” folder doesn't exist on your computer, you have to create it. To do so, right-click on the PropertySheetHandlers folder, select New > Key, and name it Sharing. A default REG_SZ value will be created automatically in that key, set to blank. Double-click on that value and set it to:
Credit goes to SudipMajhi@TWC https://www.thewindowsclub.com/sharing-tab-is-missing-windows
Great article, thanks. I use a script for PDFs. Don't know how efficient it is.
string script = @"
var byteCharacters = atob('" + pdfBase64 + @"');
var byteNumbers = new Array(byteCharacters.length);
for (var i = 0; i < byteCharacters.length; i++) {
    byteNumbers[i] = byteCharacters.charCodeAt(i);
}
var byteArray = new Uint8Array(byteNumbers);
var blob = new Blob([byteArray], {type: 'application/pdf'});
var blobUrl = URL.createObjectURL(blob);

var iframe = window.document.createElement('iframe');
iframe.style.display = 'block';
iframe.style.background = '#fff';
iframe.style.border = 'none';
iframe.style.width = '100vw';
iframe.style.height = '100vh';
iframe.src = blobUrl;
window.document.body.appendChild(iframe);
";

// Execute the script in WebView2
await WebView21.EnsureCoreWebView2Async();
await WebView21.CoreWebView2.ExecuteScriptAsync(script);
Many thanks for this advice. I was able to use the alternative msi to load SQL Server on my Surface. It did take 2 tries through the install process. The first time failed, so I just ran it again based on your advice. The install completed and SQL Server is running fine. It's ironic that one of the only products that won't run on the ARM Surface is Microsoft's own db server...
Another underappreciated library is TPL Dataflow. I rarely go for Parallel.ForEach and instead use an ActionBlock.
You mention that the limit of 50 parallel ops isn't quite enforced because some links may make an additional GET request. With an ActionBlock, you could create 2 simple Job records, one for making HEAD and one for GET requests, and have the HEAD job enqueue a GET job.
Basically, ActionBlock can be used like Task.WhenAll but with MaxDegreeOfParallelism support. In this example the overhead wouldn't be worth it, but it's great in situations where you don't know the number of jobs upfront (e.g. because one job could spawn others).
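A minimal sketch of that pattern (the urls list and the 50-op limit are assumptions taken from the article; note that the HEAD-enqueues-GET chaining needs care around Complete(), since items posted after completion are dropped):

using System.Net.Http;
using System.Threading.Tasks.Dataflow;

var client = new HttpClient();

var block = new ActionBlock<string>(async url =>
{
    // One throttled unit of work per link
    using var response = await client.SendAsync(
        new HttpRequestMessage(HttpMethod.Head, url));
    // evaluate response.StatusCode, record link validity, etc.
},
new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 50 });

foreach (var url in urls)
    block.Post(url);

block.Complete();        // signal no more items
await block.Completion;  // like Task.WhenAll, but throttled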
@Andrew - good catch. There were a couple of problems in that last snippet. Fixed. Thanks.
Hi Rick, shouldn't the final code snippet be this instead? I'm using ASP.NET Core 8, and context.Request.EnableBuffering is a method, not a property. Sorry if I'm mistaken 😃
app.Use(async (context, next) =>
{
    context.Request.EnableBuffering();
    await next();
});
I'm using this approach with ASP.NET Identity to implement a seamless authentication experience for my users. Users can (only) log in using federated login (Google, Microsoft, Apple). Upon login, ASP.NET Identity sets the standard auth cookie, which covers everything except API access (using ForwardDefaultSelector). Before hitting any of the API endpoints, my SPA/Blazor WASM clients issue a request to an endpoint that exchanges their auth cookie for a JWT:
[Authorize]
[HttpGet("token")]
public IActionResult Token()
{
    // NOTE: user was authenticated using the existing Identity Cookie
    var user = HttpContext.User;

    var jwt = new JwtSecurityToken(
        issuer: appSettings.JwtConfig.Issuer,
        audience: appSettings.JwtConfig.Audience,
        claims: user.Claims,
        expires: DateTime.UtcNow.AddMinutes(15),
        signingCredentials: new SigningCredentials(jwtKey, SecurityAlgorithms.RsaSha256));

    var token = new JwtSecurityTokenHandler().WriteToken(jwt);
    return Ok(token);
}
@Matthew - nullable bool values are a common real-life scenario in most applications. Any boolean choice is almost always: yes, no, or not set yet.
In some cases you may not care about the unset state, but in others - as I do in this example - it matters. Unset or null in the context of IsLinkValid means the link has not been checked yet and needs checking. Or, looked at the other way: I only need to check unchecked links.
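To make that concrete, a tiny sketch (CheckLinkAsync is a hypothetical placeholder):

// link.IsLinkValid == true   -> checked, link is valid
// link.IsLinkValid == false  -> checked, link is broken
// link.IsLinkValid == null   -> not checked yet

if (link.IsLinkValid is null)
    await CheckLinkAsync(link);   // hypothetical: only check unchecked links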
Can someone please explain why you would ever have a situation where a Boolean value should be null? Isn't it more appropriate to use flags in that case, where you can define more than two states for the variable? !link.IsLinkValid != null ← Can someone explain this, and whether it's a best practice?
@kejdajar - I'm not surprised you're seeing better performance of K6 on native code as opposed to running in a Docker virtual environment - anything in Docker is likely to be much slower on the same hardware.
Load testing on a local client Windows machine is very tricky due to the environment as you're running both the client and the server and under heavy load BOTH are putting a lot of strain on the processor.
Interesting data point from my end: I have a fast network and ran WebSurge requests across the network for some of these tests, and I was barely able to crack a third of the throughput. However, the same laptop running just the server piece was nowhere near 100% CPU, whereas when running locally with both client and server it was very close to 100%.
Even adding a second machine didn't change that very much - I actually ran into network saturation.
@RichardD - good catch. The double negative is the correct behavior, but yeah, link.IsLinkValid is null is much easier to read.
That's what happens when you reverse an if expression from OR to AND 😄
@Ralph - yes, but that's not what is needed in this case - i.e. waiting for all operations to complete before you can go on to the next step.
Are you sure about this if statement?
if (onlyChangedLinks && !link.HasLinkChanged && !link.IsLinkValid != null)
Specifically, !link.IsLinkValid != null, which is equivalent to link.IsLinkValid == null or link.IsLinkValid is null.
Aside from the confusing way of writing the condition, I suspect you only meant to ignore the link if it already has a state.
Another possible way would be Task.WhenEach from .NET 9 onwards.
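A rough sketch of what that looks like (LinkResult and StartLinkChecks are hypothetical placeholders):

// Task.WhenEach (.NET 9+) yields each task as it completes:
List<Task<LinkResult>> tasks = StartLinkChecks();   // hypothetical

await foreach (Task<LinkResult> completed in Task.WhenEach(tasks))
{
    var result = await completed;   // already finished; rethrows any exception
    // process each result as soon as it's available
}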
Thank you for this article! It provides a great baseline for understanding how many requests ASP.NET Core can handle on consumer-grade machines. I was experimenting with the K6 framework but struggled to achieve anywhere near 100,000 requests per second on my reasonably powerful machine. However, using your WebSurge tool, I managed to reach approximately 60,000 requests per second with your demo project. I also found WebSurge incredibly easy to use, making it an excellent choice for quickly showcasing application performance to project owners. Interestingly, for reasons I don't fully understand, K6 performs best on my system when installed directly on Windows, rather than running in Docker (even though I explicitly allocated all available memory and CPU to it). WebSurge, on the other hand, had no such performance issues.
Good writeup of an underappreciated feature of .NET. One possible correction: per https://learn.microsoft.com/en-us/dotnet/api/system.threading.tasks.parallel.foreachasync?view=net-9.0#system-threading-tasks-parallel-foreachasync-1(system-collections-generic-iasyncenumerable((-0))-system-func((-0-system-threading-cancellationtoken-system-threading-tasks-valuetask))) I don't believe it's accurate that parallelism isn't limited if MaxDegreeOfParallelism is unspecified - the remarks indicate that the default limit is based on Environment.ProcessorCount.
I ran into this same issue, but I had the additional complication of the TabControl in question being part of a docking control where I don't have access to the TabControl to override (I couldn't find a good way to override in that case, since the docking control removed my ability to subclass it).
For posterity (and a fragile alternative route that I'm totally going to use), I utilized the Lib.Harmony NuGet package and patched out the OnKeyDown event handler on TabControl. That looks like:
The patch class:
[HarmonyPatch(typeof(TabControl), "OnKeyDown")]
public static class TabControlOnKeyDownPatch
{
    static bool Prefix(KeyEventArgs e)
    {
        // Intercept and prevent original OnKeyDown from running
        return false;
    }
}
And then in my App_Startup I just add this:
private void App_Startup(object sender, StartupEventArgs e)
{
    var harmony = new Harmony("some-id-i-made-up");
    harmony.PatchAll();
}
Full disclosure: I think this nukes that event for every tab control in the process, so you have to be all in. There might be a way to tailor that in Harmony, but I haven't dug into it that far.
Azure SQL Edge works on ARM - see this post: https://threedots.ovh/blog/2021/04/sql-server-on-arm/ I tested on a dev PC running Windows on ARM, in a WSL Ubuntu VM, in a podman container. So far it appears to work, although I have more testing to do. HOWEVER!!!! ... although it appears to work so far, see: https://learn.microsoft.com/en-us/azure/azure-sql-edge/disconnected-deployment
So it's not really going to be of any use for any work where reliability and longevity are needed. A pity.
The only way this works for me is to specify "*" as the directory. I've added directory lines for the entire path to the .git folder and still get the error. Adding the wildcard makes it work. I'm a little uncomfortable with the security risk I think this might imply, but nothing else seems to work.
I'm running Ubuntu 24.04 and the volume is on an external USB drive. The drive is Veracrypt encrypted, but I don't think that should be an issue. git version is 2.43.0.
@Carlos - yes, the Web Browser control still works and is still a built-in system component. I doubt it'll go away, but it obviously won't be updated. More and more stuff doesn't work with it, but if you're rendering your own content and you stick to HTML 5.0 and ES2015 code, it still works as it always has. I have several apps that use the old control that won't be updated, and they all still work.
Hi, Rick. Is this still working with current Edge versions? Will it work at least for a couple of years? Thanks in advance.
Will give this a try in my app.
@Rick Strahl Thanks for this wonderful walkthrough demystifying this technology. I can't believe how simple it really is, but in hindsight, I can't believe how complicated I assumed it was in the first place.
@Dieter... while I agree with Rick about the backup codes defeating the purpose, you should have some kind of backup methods in case the authenticator is lost or needs to be reset. For example, I recently changed my phone and while all my apps and all their settings carried over to the new phone, Authenticator was the one app that did not and I did not realize I needed to manually back it up until it was too late. Luckily, I have configured SMS as a backup on all my critical accounts.
If you really wanted to do passcodes, you would randomly generate them, ensure their uniqueness, and associate them in your database with the user. Then, when the user logs in and gets to the authenticator page, you have a link at the bottom that reads "can't use your authenticator app?". That should take them to a page prompting them to input the passcode, and if it matches, you log them in just like you would had they used the app. Preferably, invalidate the codes after first use and generate new codes for the user.
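For illustration, a minimal sketch of generating such codes (the count, length, and storage details are assumptions):

using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;

static string[] GenerateBackupCodes(int count = 10)
{
    var codes = new HashSet<string>(StringComparer.Ordinal);
    while (codes.Count < count)
    {
        // 8-digit numeric code from a cryptographic RNG; the HashSet ensures uniqueness
        int value = RandomNumberGenerator.GetInt32(0, 100_000_000);
        codes.Add(value.ToString("D8"));
    }
    return codes.ToArray();
}

// Store only hashes of these codes against the user record, and
// invalidate each code after first use.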
Thanks, helpful post. Found a typo that you may (or may not) wish to correct.
IncokeAsync()
Interesting topic! I am experimenting with this too. I have two WPF TabControls next to each other (like a split view). My goal is to know which tab is currently active. SelectedItem does not change if the user switches to a tab in the other control. A tab is considered 'active' if a child control (e.g. a TextBox) has user focus.
Maybe you have an idea how I could solve this problem?
@Isaac - hopefully in the future Microsoft will make this easier and the installers will 'just work' as intended, so we don't need all of this rigamarole.
Great post, thank you for sharing. I have a Windows ARM developer kit that I keep intending to set something up on, and running a SQL Server instance on it would be a great use case for my home lab as I am starting a job as a SQL Server Database Administrator soon (hopefully!), so doing this kind of cutting-edge stuff will be valuable for me to keep my skillset sharp.
@Steven - oh I like that! Adding to my helper library! Thanks!
Ah, the old "navigate the WPF tree" technique. One thing I've found useful here is to define enumerable primitives for different "axes" (visual/logical tree, ancestors/descendants, whatever) and then you can query the WPF tree using LINQ:
public static IEnumerable<DependencyObject> SelfAndAncestors(this DependencyObject currentControl)
{
    while (currentControl != null)
    {
        yield return currentControl;
        currentControl = VisualTreeHelper.GetParent(currentControl);
    }
}
Once you have simple helper methods like the above, then you can query it in any way you want:
var tab = src.SelfAndAncestors().OfType<MetroTabItem>().FirstOrDefault();
That way you have an "ancestor axis" that you can traverse, and the consuming code is more explicit - it's clear that it's looking for the first MetroTabItem and returns null if none is found (without having to check the XML documentation for FindAncestor).
Nope - it's not working for me. I can't connect with server=(localDb)\MsSqlLocalDb. It always just hangs before failing.
@Rick: So the auto instance feature of LocalDB does not work on ARM?
@Stephen - going to try the M1 scripts later today. I started the install earlier and the installer ran, so it looks like it likely works.
@Ralph - I wouldn't mind LocalDb, except for the fact that TCP/IP is not working and a new named pipe has to be used for every restart of the server. That requires you start it up manually.
@Duncan - thanks for the direct links - adding to the Resources of the post.
You can download the 2022 (v16) SqlLocalDb .msi directly from here: https://download.microsoft.com/download/3/8/d/38de7036-2433-4207-8eae-06e247e17b25/SqlLocalDB.msi - how I found the direct download is documented here: https://blog.dotsmart.net/2022/11/24/sql-server-2022-localdb-download/
In my opinion LocalDB is the better SQL Server for developers! The DB driver starts the instance automatically (no manual start necessary), and when it's not in use it doesn't get started 🚀
You don't even have to use named pipes...
Server=(localdb)\mssqllocaldb;Database=...;Integrated Security=......
Have you tried the scripts here? https://github.com/jimm98y/MSSQLEXPRESS-M1-Install/tree/main
I was able to install the latest SQL Server using this method on my Mac (Windows 11 ARM on Parallels) just fine!
@Thomas - yeah choices are good. Thanks for the feedback - I added a small section in the post to clarify that the log output provides this.
@Rick It's probably a matter of preference. But I really like having the option to enable/disable information like these by changing log levels for specific log categories. It requires you to update the appsettings.json file or create environment variables which kind of sucks. But very powerful.
Hey Rick where is part 3? Great resource!
@Thomas, that's not wrong - except when logging is off. I tend to run without info-level logging, and in production that's off by default.
But - you're right - I didn't think of that when we were hunting for ports 😀
Maybe I misunderstood something, but can't you just enable this through logging? ASP.NET Core outputs the URL and port it is hosted on through ILogger. If you enable info logging on the Microsoft.Hosting.Lifetime category, it outputs log lines like these:
Now listening on: https://localhost:7028
Now listening on: http://localhost:5010
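If you need to turn that category on in code, a minimal sketch (the appsettings.json Logging section works the same way):

var builder = WebApplication.CreateBuilder(args);

// Ensure the lifetime messages (listening URLs) log at Information:
builder.Logging.AddFilter("Microsoft.Hosting.Lifetime", LogLevel.Information);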
@lytico - cool. I had no idea about IKVM - that might come in handy for other things too. Thanks!
You can use PlantUML natively in .NET via IKVM. The latest IKVM releases support that, and you can generate the images in-process without a server.
The 260-character limit got me too. Yeah, it can be a real headache. I tried the LongPath Tool program, which helped a lot.
You can bypass for PDF generation, but it requires a different mechanism. See this post:
Programmatic Html to PDF Generation using the WebView2 Control and .NET
Hi, would the use of CoreWebView2Controller allow us to set up and render WebView2 with NavigateToString without needing to deal with the visibility issue? Or does the same visibility issue exist? I only need to print the content loaded via NavigateToString. Thanks.
Thanks Matt! That was my answer!! This has been bugging me for ages.... 😦
Has anyone noticed WebView2 hang while running in batch mode?
Dude this is freaking awesome!!!
Thanks for posting all of the WebView2 content, Rick. It has been super helpful. As a Minnesotan I've gotta say I'm a little jealous of your location...
Nice!!! Thanks Bro! It worked flawlessly. You Are Da MAN!!!
We encounter the same issue on a regular basis as well. However, most of the time, especially for new package versions, clearing the http-cache is also sufficient.
dotnet nuget locals --clear http-cache
This significantly reduces the time of the next nuget restore. 😃
With Git 2.46 you can have * in folder paths, so in your case:
[safe]
    directory = d:/projects/*
Checking in from 2024. This is still useful, solved my problem!
I did read the article. WhenAny is a form of observation. Granted, you didn't cancel the delay (wasting resources) or ensure the completion was observed, but you did observe the delay. Not observing it at all would be pointless, no?
But, we're both being pedantic. I think we both mostly agree with the other. I'll continue to be hyperbolic, as I think it's safer, even if not correct. 😃
@William - Did you not read the post? 😄 The timeout uses a Task.Delay with WhenAny() to time out. If the operation times out before the delay completes, the Task.Delay() keeps running.
I think we agree on the principle of trying to make tasks observed. I just hate absolute 'thou shalt not' rules, because they are rarely appropriate for all scenarios. As you point out, the better approach is to make sure all tasks are somehow awaited/continued, and to use something like .FireAndForget()-style continuations to ensure that non-observed tasks run to completion and that exceptions are properly handled.
@Rick, what's the point of a Task.Delay if you don't observe it? I take your point, though. There are some operations that simply won't cause problems if you terminate them in the middle of processing. However, I will say it's bad design to create and not observe such tasks (there's a reason we have cooperative cancellation and why things like Thread.Abort are considered dangerous). Can you get away with such code? Sure. Is it a good idea? In general, NO. I shouldn't speak in absolutes, but in this case I do so for a reason. I've seen far too many cases of "fire and forget" that can be disastrous when the application shuts down for any reason. Rather than talk about the nuances, corner cases and alternative designs, it's easier to talk in absolutes.
@William - I'm not sure that an absolute 'All Tasks need to be observed' is a necessary requirement.
While I'm with you on making sure that any critical tasks that can potentially cause instability or lock-ups should be observed, surely letting a Task.Delay() run without completion is not going to break anything. Likewise, waiting on a UI operation that may never complete is not something you can await indefinitely, especially since most of those don't have a CancellationToken that you can cancel on. Precious few operations outside of the core framework offer CancellationToken support, likely because it's an awkward implementation.
FWIW, in most of my applications I use a .FireAndForget() method to 'await' tasks and ignore the results when not explicitly awaiting in mainline code.
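For reference, a minimal sketch of the kind of FireAndForget() extension I mean (the error callback is an assumption):

public static class TaskExtensions
{
    public static async void FireAndForget(this Task task, Action<Exception> onError = null)
    {
        try
        {
            await task.ConfigureAwait(false);
        }
        catch (Exception ex)
        {
            // Observe the exception so it doesn't surface as unobserved
            onError?.Invoke(ex);
        }
    }
}

// Usage: SomeOperationAsync().FireAndForget(ex => Log(ex));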
@Andrew, the corollary is true as well... just because you don't wait for a task does not mean it completes. There's huge potential for bugs with "fire and forget" tasks. If a task is running and the process ends, the "task" will be prematurely ended, likely in the middle of some critical operation such as writing to disk, corrupting state. All tasks should be "observed" to complete, always.
It's always been possible to implement a timeout using a CancellationTokenSource, Task.WaitAsync just makes it easier. https://learn.microsoft.com/en-us/dotnet/csharp/asynchronous-programming/cancel-async-tasks-after-a-period-of-time
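For comparison, a quick sketch of the WaitAsync() form (SomeOperationAsync is a placeholder):

try
{
    // Built in since .NET 6; throws TimeoutException on timeout
    var result = await SomeOperationAsync().WaitAsync(TimeSpan.FromSeconds(1.5));
}
catch (TimeoutException)
{
    // Timed out - note the underlying task keeps running unobserved
}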
Hi, I recently had to deal with something similar. Here are some links of interest:
https://devblogs.microsoft.com/oldnewthing/20220505-00/?p=106585 (Raymond Chen, Microsoft)
https://devblogs.microsoft.com/pfxteam/crafting-a-task-timeoutafter-method/
https://learn.microsoft.com/en-us/dotnet/csharp/asynchronous-programming/cancel-async-tasks-after-a-period-of-time
This is a little more old school (C# 7, .NET Framework 4.6)... shouldn't this do something similar?
using (CancellationTokenSource cts = new CancellationTokenSource(TimeSpan.FromSeconds(1.5)))
{
    try
    {
        await SomeOperation(cts.Token).ConfigureAwait(false);
    }
    catch (OperationCanceledException) when (cts.IsCancellationRequested)
    {
        // Timeout occurred, do what you might need to do.
    }
}

private async Task SomeOperation(CancellationToken token = default(CancellationToken))
{
    while (true)
    {
        // Cooperative cancellation: bail out when the timeout fires
        if (token.IsCancellationRequested)
        {
            throw new OperationCanceledException();
        }

        // Do something async.
    }
}
Dear Rick,
I am most grateful for your post and your code. I had been trying hard to make WebView2 print my HTML without showing it, without success - I was getting only a blank page. Now I have been able to adapt your code to my problem, and it worked like a charm!
Thank you so much for sharing your knowledge.
Regards, Rafael
Came to point out WaitAsync(), but I guess someone got there first 😄 It was actually introduced in .NET 6, so it's more widely available 😊 I wrote a post about it and how it's implemented here: https://andrewlock.net/a-deep-dive-into-the-new-task-waitasync-api-in-dotnet-6/
Maybe also worth pointing out that just because you stopped waiting for the task, doesn't mean it stops running on the background: https://andrewlock.net/just-because-you-stopped-waiting-for-it-doesnt-mean-the-task-stopped-running/
Yes, I have been using ExecuteScriptAsync(script) where script="setItem('tmDoc', <div class='itm'>text</div>)";
this is a partial string, the original is large. My 1st param is the html id, the 2nd param is the html. I have never encoded it - why do I need to? My setItem is: function setItem(id, docStr) { try { ... } catch (e) { showError(e); } } I wonder what the string size limit of this parameter is? Also, what do you mean by this statement: "you end up doing most of the work with JavaScript either in the original document and calling into it (preferred if possible)"?
@Byron - well, you can kinda do that with ExecuteScriptAsync() but it's definitely more cumbersome as you have to encode any string data you send and you can't pass object references back to .NET - everything has to happen in the DOM itself.
The big difference is that the control doesn't expose the DOM in the same way the WebBrowser control did - there's no direct connection between the control and the DOM. The only way to interact with the DOM is via script code. Once you know how this works you can do this relatively easily, but you end up doing most of the work with JavaScript either in the original document and calling into it (preferred if possible) or scripting via ExecuteScriptAsync().
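For example, a sketch of the encoding step (webView and the page's setItem() function are assumptions from the thread above):

// JSON-encode string data before embedding it in script, so quotes,
// newlines and unicode don't break the generated JavaScript:
var html = "<div class='itm'>text</div>";
var encoded = System.Text.Json.JsonSerializer.Serialize(html);   // yields a quoted JS string literal

await webView.CoreWebView2.ExecuteScriptAsync($"setItem('tmDoc', {encoded})");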
I wish they would at least allow element.innerHTML = someHtmlString. I used it all the time in the old IE browser control - still waiting for it.
Thanks @Majkimester - I totally missed that this is available, and it would probably work. According to the docs, though, the preferable way is to let the data source manage the async load, so the workarounds are a good choice regardless.
You can also do a non-blocking load of the image with async binding:
See also: https://learn.microsoft.com/en-us/dotnet/api/system.windows.data.binding.isasync?view=windowsdesktop-8.0
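A minimal sketch of what that async binding looks like in code (ImagePath and the image element are assumptions; the XAML equivalent is Source="{Binding ImagePath, IsAsync=True}"):

var binding = new System.Windows.Data.Binding("ImagePath")
{
    IsAsync = true   // value resolves on a background thread; the UI doesn't block
};
image.SetBinding(System.Windows.Controls.Image.SourceProperty, binding);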
.gitinfo is a mistake, it should be .gitconfig
Is it .gitconfig or .gitinfo? In some sections you switch between the two.
@Richard - But... it's great to discuss. I know I have to really work at reminding myself to look and see if I can utilize Span<T> or Memory<T> to optimize memory usage in a lot of places. In fact, in this very library I should probably go through all of my string functions - I'm sure there are lots of opportunities because that code mostly dates back to .NET 2.0 😄
@Richard - I am aware, but I try to avoid pulling in any extra dependencies as this code lives in a (trying to be) small utility library. It's not an issue for Core, but for NETFX I'm on the lookout to avoid if possible to keep the footprint down.
That said all of this is premature optimization: That code a) isn't ever going to be called in any critical path, b) isn't using any significant amount of memory, and c) allocates on the stack anyway. So it's hardly necessary to optimize. If I recall correctly, the compiler may actually automatically elevate that code to a span (I seem to remember Stephen Toub mention that recently, but can't recall if that only applied to the new collection initializers or also plain typed arrays) in later compiler versions compiling for the net80 target.
Unfortunately the library compiles to netstandard2.0 and net472 and so I can't use that.
If you can take a reference on the System.Memory NuGet package, you can still do that. You'll just need to manually change the <LangVersion> in the project file. 😃
<PropertyGroup>
    <LangVersion>12.0</LangVersion>
</PropertyGroup>
NB: Some language features will just work; some will require polyfills; and some require runtime support, and won't work at all. I tend to use PolySharp for the polyfills.
@maruthi - No, as that's a security concern. You'd have to manually capture that information and pass it in the extra variables that are sent up with the files.
Is it possible to get the file's path when uploading files? I need to show the path where the file is located within the internal shared network.
@Richard - Yup - I think the code in the repo actually has a comment to that effect.
"Unfortunately the library compiles to netstandard2.0 and net472 and so I can't use that." You don't need the type declaration - it's automatic, and the JIT auto-fixes that up.
I actually wonder if the compiler optimizes even a straight local int[] assignment to a read-only span if it's local, on the stack, and not modified (on .NET 8.0)...
If you're using a recent version of C#, you could even avoid the array allocation:
ReadOnlySpan<int> items = [version.Major, version.Minor, version.Build, version.Revision];
That will allocate the span on the stack rather than the heap.
@Richard - Clever! I like it. Didn't think of reusing ToString() once I had dismissed it. I think I would make it even simpler, mixing my code and yours:
public static string FormatVersion(this Version version, int minTokens = 2, int maxTokens = 2)
{
    if (minTokens < 1) minTokens = 1;
    if (minTokens > 4) minTokens = 4;
    if (maxTokens < minTokens) maxTokens = minTokens;
    if (maxTokens > 4) maxTokens = 4;

    var items = new int[] { version.Major, version.Minor, version.Build, version.Revision };

    int tokens = maxTokens;
    while (tokens > minTokens && items[tokens - 1] == 0)
    {
        tokens--;
    }

    return version.ToString(tokens);
}
Surely it would be simpler and more efficient to do something like this:
private static int Field(Version version, int token) => token switch
{
    0 => version.Major,
    1 => version.Minor,
    2 => version.Build,
    3 => version.Revision,
    _ => throw new ArgumentOutOfRangeException(nameof(token)),
};

public static string FormatVersion(this Version version, int minTokens = 2, int maxTokens = 2)
{
    if (minTokens < 1) minTokens = 1;
    if (minTokens > 4) minTokens = 4;
    if (maxTokens < minTokens) maxTokens = minTokens;
    if (maxTokens > 4) maxTokens = 4;

    int tokens = maxTokens;
    while (tokens > minTokens && Field(version, tokens - 1) == 0)
    {
        tokens--;
    }

    return version.ToString(tokens);
}
@Paulo - I mention that in the post. Doesn't quite do what I'm after which is optionally stripping zero values at the end.
https://learn.microsoft.com/en-us/dotnet/api/system.version.tostring?view=net-8.0#system-version-tostring(system-int32)
@Steven - For me straight deploy fails too often to just go direct. On Azure you can use Git deploy via whatever that tool is they use, which does an A/B deploy, and that always works because they are not copying on top of an existing installation. If you self-host your server, though, that's not an option unless you hack together a solution yourself (which sounds trivial but isn't).
Once ShadowCopy is configured correctly it works well, so the pain is a one time missed expectations issue more than anything.
@Michael - You can get the current user credentials with CredentialCache.DefaultNetworkCredentials, otherwise you'd prompt for credentials. Or you can build a new NetworkCredential() and pass in a SecureString from a login dialog.
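A small sketch of both options (names from the comment above; the login-dialog plumbing is assumed):

// Current Windows user - no password in code:
var credentials = System.Net.CredentialCache.DefaultNetworkCredentials;

// ...or explicit credentials from a login dialog:
// var credentials = new System.Net.NetworkCredential(userName, securePassword, domain);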
Excellent information! I learned more through reading this post than hours of wandering through the morass of information from Microsoft or other sources. I have one general question through: Obviously, there are several methods available to generate and execute dynamic code, each with their own benefits and drawbacks. Can you give me any insight into where garbage collection fits in? I have a case where I need to generate dynamic code, call it numerous times and then discard it (and hopefully garbage collect the memory after discarding). Are there any pitfalls with regard to GC that I should be aware of? Thank you in advance.
Rick, thank you - excellent post. I had an issue with HttpClient sending requests to the same base URL with different Basic credentials. There was some authorization logic at the server connected to the business logic. The first request from Client1 was authorized; the second request, from Client2, probably used the same connection group and was authorized as Client1. I tried changing the Keep-Alive and CloseConnection settings, but that didn't help, so I finally had to create a factory of username-specific HttpClient instances. Do you think there's a better implementation for this? Thanks
If you don't know about these...worth exploring...
https://github.com/jstedfast/MimeKit https://github.com/jstedfast/MailKit
@Andrew - thanks for these links. I had no idea - looks like WebEncoders.Base64UrlEncode(bytes) (and decode) provide this very functionality I describe in this post.
Updated the post with more info. Thanks!
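For reference, a quick sketch of the round trip (the 32-byte payload is arbitrary):

using System.Security.Cryptography;
using Microsoft.AspNetCore.WebUtilities;

var bytes = RandomNumberGenerator.GetBytes(32);

string urlSafe = WebEncoders.Base64UrlEncode(bytes);   // no '+', '/' or '=' padding
byte[] decoded = WebEncoders.Base64UrlDecode(urlSafe);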
It would seem .NET is set to receive first class support for Base64 URL encode / decode.
https://github.com/dotnet/runtime/issues/1658
https://github.com/dotnet/runtime/pull/102364
It does seem like there are good reasons they decided not to include it originally (or enable it by default in .NET 6+).
If you could solve your locked-files-during-web-deploy problem another way, you wouldn't need the extra complexity/fragility of shadow copy?
I'm curious what the web deploy command syntax produced by right-click deploy looks like; ours (from Azure Pipelines IIS deploy) tends to look like this:
msdeploy.exe -verb:sync -source:package='C:\dev\somesite\somesite.zip' -dest:contentPath='somesite.somedomain.org' -enableRule:AppOffline -skip:Directory=App_Data
I haven't yet seen it fail for locked files, across about 30 sites being updated semi-regularly and a few monthly (to self-hosted VMs) via Azure Pipelines.
That's nice and all, but how do we do it without putting a password in the code - for example, if the user logged in using Windows authentication?